00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 21
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3520
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.006 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.007 The recommended git tool is: git
00:00:00.007 using credential 00000000-0000-0000-0000-000000000002
00:00:00.010 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.034 Fetching changes from the remote Git repository
00:00:00.036 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.068 Using shallow fetch with depth 1
00:00:00.068 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.068 > git --version # timeout=10
00:00:00.103 > git --version # 'git version 2.39.2'
00:00:00.103 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.148 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.148 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.049 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.063 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.076 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:04.076 > git config core.sparsecheckout # timeout=10
00:00:04.090 > git read-tree -mu HEAD # timeout=10
00:00:04.110 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:04.127 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:04.128 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:04.217 [Pipeline] Start of Pipeline
00:00:04.227 [Pipeline] library
00:00:04.229 Loading library shm_lib@master
00:00:04.229 Library shm_lib@master is cached. Copying from home.
00:00:04.242 [Pipeline] node
00:00:19.243 Still waiting to schedule task
00:00:19.244 Waiting for next available executor on ‘vagrant-vm-host’
00:22:47.339 Running on VM-host-SM4 in /var/jenkins/workspace/raid-vg-autotest
00:22:47.341 [Pipeline] {
00:22:47.353 [Pipeline] catchError
00:22:47.355 [Pipeline] {
00:22:47.373 [Pipeline] wrap
00:22:47.382 [Pipeline] {
00:22:47.391 [Pipeline] stage
00:22:47.393 [Pipeline] { (Prologue)
00:22:47.414 [Pipeline] echo
00:22:47.416 Node: VM-host-SM4
00:22:47.423 [Pipeline] cleanWs
00:22:47.433 [WS-CLEANUP] Deleting project workspace...
00:22:47.433 [WS-CLEANUP] Deferred wipeout is used...
00:22:47.439 [WS-CLEANUP] done
00:22:47.636 [Pipeline] setCustomBuildProperty
00:22:47.728 [Pipeline] httpRequest
00:22:48.135 [Pipeline] echo
00:22:48.137 Sorcerer 10.211.164.101 is alive
00:22:48.146 [Pipeline] retry
00:22:48.148 [Pipeline] {
00:22:48.165 [Pipeline] httpRequest
00:22:48.169 HttpMethod: GET
00:22:48.170 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:22:48.170 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:22:48.171 Response Code: HTTP/1.1 200 OK
00:22:48.172 Success: Status code 200 is in the accepted range: 200,404
00:22:48.172 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:22:48.319 [Pipeline] }
00:22:48.336 [Pipeline] // retry
00:22:48.343 [Pipeline] sh
00:22:48.622 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:22:48.638 [Pipeline] httpRequest
00:22:49.034 [Pipeline] echo
00:22:49.036 Sorcerer 10.211.164.101 is alive
00:22:49.046 [Pipeline] retry
00:22:49.049 [Pipeline] {
00:22:49.063 [Pipeline] httpRequest
00:22:49.067 HttpMethod: GET
00:22:49.067 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:22:49.068 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:22:49.069 Response Code: HTTP/1.1 200 OK
00:22:49.070 Success: Status code 200 is in the accepted range: 200,404
00:22:49.070 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:22:51.313 [Pipeline] }
00:22:51.332 [Pipeline] // retry
00:22:51.341 [Pipeline] sh
00:22:51.620 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:22:54.987 [Pipeline] sh
00:22:55.266 + git -C spdk log --oneline -n5
00:22:55.266 b18e1bd62 version: v24.09.1-pre
00:22:55.266 19524ad45 version: v24.09
00:22:55.266 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:22:55.266 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:22:55.266 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:22:55.287 [Pipeline] withCredentials
00:22:55.297 > git --version # timeout=10
00:22:55.311 > git --version # 'git version 2.39.2'
00:22:55.324 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:22:55.327 [Pipeline] {
00:22:55.337 [Pipeline] retry
00:22:55.339 [Pipeline] {
00:22:55.355 [Pipeline] sh
00:22:55.632 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:22:55.642 [Pipeline] }
00:22:55.660 [Pipeline] // retry
00:22:55.665 [Pipeline] }
00:22:55.681 [Pipeline] // withCredentials
00:22:55.690 [Pipeline] httpRequest
00:22:56.091 [Pipeline] echo
00:22:56.094 Sorcerer 10.211.164.101 is alive
00:22:56.103 [Pipeline] retry
00:22:56.106 [Pipeline] {
00:22:56.118 [Pipeline] httpRequest
00:22:56.122 HttpMethod: GET
00:22:56.123 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:22:56.123 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:22:56.125 Response Code: HTTP/1.1 200 OK
00:22:56.125 Success: Status code 200 is in the accepted range: 200,404
00:22:56.126 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:22:57.356 [Pipeline] }
00:22:57.374 [Pipeline] // retry
00:22:57.382 [Pipeline] sh
00:22:57.660 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:22:59.570 [Pipeline] sh
00:22:59.847 + git -C dpdk log --oneline -n5
00:22:59.847 eeb0605f11 version: 23.11.0
00:22:59.847 238778122a doc: update release notes for 23.11
00:22:59.847 46aa6b3cfc doc: fix description of RSS features
00:22:59.847 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:22:59.847 7e421ae345 devtools: support skipping forbid rule check
00:22:59.863 [Pipeline] writeFile
00:22:59.879 [Pipeline] sh
00:23:00.160 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:23:00.173 [Pipeline] sh
00:23:00.452 + cat autorun-spdk.conf
00:23:00.452 SPDK_RUN_FUNCTIONAL_TEST=1
00:23:00.452 SPDK_RUN_ASAN=1
00:23:00.452 SPDK_RUN_UBSAN=1
00:23:00.452 SPDK_TEST_RAID=1
00:23:00.452 SPDK_TEST_NATIVE_DPDK=v23.11
00:23:00.452 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:23:00.452 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:23:00.458 RUN_NIGHTLY=1
00:23:00.459 [Pipeline] }
00:23:00.472 [Pipeline] // stage
00:23:00.487 [Pipeline] stage
00:23:00.489 [Pipeline] { (Run VM)
00:23:00.500 [Pipeline] sh
00:23:00.777 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:23:00.777 + echo 'Start stage prepare_nvme.sh'
00:23:00.777 Start stage prepare_nvme.sh
00:23:00.777 + [[ -n 0 ]]
00:23:00.777 + disk_prefix=ex0
00:23:00.777 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:23:00.777 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:23:00.777 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:23:00.777 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:23:00.777 ++ SPDK_RUN_ASAN=1
00:23:00.777 ++ SPDK_RUN_UBSAN=1
00:23:00.777 ++ SPDK_TEST_RAID=1
00:23:00.777 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:23:00.777 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:23:00.777 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:23:00.777 ++ RUN_NIGHTLY=1
00:23:00.777 + cd /var/jenkins/workspace/raid-vg-autotest
00:23:00.777 + nvme_files=()
00:23:00.777 + declare -A nvme_files
00:23:00.777 + backend_dir=/var/lib/libvirt/images/backends
00:23:00.777 + nvme_files['nvme.img']=5G
00:23:00.777 + nvme_files['nvme-cmb.img']=5G
00:23:00.777 + nvme_files['nvme-multi0.img']=4G
00:23:00.777 + nvme_files['nvme-multi1.img']=4G
00:23:00.777 + nvme_files['nvme-multi2.img']=4G
00:23:00.777 + nvme_files['nvme-openstack.img']=8G
00:23:00.777 + nvme_files['nvme-zns.img']=5G
00:23:00.777 + (( SPDK_TEST_NVME_PMR == 1 ))
00:23:00.777 + (( SPDK_TEST_FTL == 1 ))
00:23:00.777 + (( SPDK_TEST_NVME_FDP == 1 ))
00:23:00.777 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:23:00.777 + for nvme in "${!nvme_files[@]}"
00:23:00.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:23:00.777 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:23:00.777 + for nvme in "${!nvme_files[@]}"
00:23:00.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:23:00.777 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:23:00.777 + for nvme in "${!nvme_files[@]}"
00:23:00.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:23:00.777 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:23:00.777 + for nvme in "${!nvme_files[@]}"
00:23:00.777 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:23:01.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:23:01.035 + for nvme in "${!nvme_files[@]}"
00:23:01.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:23:01.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:23:01.035 + for nvme in "${!nvme_files[@]}"
00:23:01.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:23:01.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:23:01.035 + for nvme in "${!nvme_files[@]}"
00:23:01.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:23:01.969 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:23:01.969 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:23:01.969 + echo 'End stage prepare_nvme.sh'
00:23:01.969 End stage prepare_nvme.sh
00:23:01.979 [Pipeline] sh
00:23:02.259 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:23:02.259 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:23:02.259
00:23:02.259 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:23:02.259 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:23:02.259 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:23:02.259 HELP=0
00:23:02.259 DRY_RUN=0
00:23:02.259 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:23:02.259 NVME_DISKS_TYPE=nvme,nvme,
00:23:02.259 NVME_AUTO_CREATE=0
00:23:02.259 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:23:02.259 NVME_CMB=,,
00:23:02.259 NVME_PMR=,,
00:23:02.259 NVME_ZNS=,,
00:23:02.259 NVME_MS=,,
00:23:02.259 NVME_FDP=,,
00:23:02.259 SPDK_VAGRANT_DISTRO=fedora39
00:23:02.259 SPDK_VAGRANT_VMCPU=10
00:23:02.259 SPDK_VAGRANT_VMRAM=12288
00:23:02.259 SPDK_VAGRANT_PROVIDER=libvirt
00:23:02.259 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:23:02.259 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:23:02.259 SPDK_OPENSTACK_NETWORK=0
00:23:02.259 VAGRANT_PACKAGE_BOX=0
00:23:02.259 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:23:02.259 FORCE_DISTRO=true
00:23:02.259 VAGRANT_BOX_VERSION=
00:23:02.259 EXTRA_VAGRANTFILES=
00:23:02.259 NIC_MODEL=e1000
00:23:02.259
00:23:02.259 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:23:02.259 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:23:05.540 Bringing machine 'default' up with 'libvirt' provider...
00:23:06.106 ==> default: Creating image (snapshot of base box volume).
00:23:06.364 ==> default: Creating domain with the following settings...
00:23:06.364 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728481932_7bc195900d0e7b51fb87
00:23:06.364 ==> default: -- Domain type: kvm
00:23:06.364 ==> default: -- Cpus: 10
00:23:06.364 ==> default: -- Feature: acpi
00:23:06.364 ==> default: -- Feature: apic
00:23:06.364 ==> default: -- Feature: pae
00:23:06.364 ==> default: -- Memory: 12288M
00:23:06.364 ==> default: -- Memory Backing: hugepages:
00:23:06.365 ==> default: -- Management MAC:
00:23:06.365 ==> default: -- Loader:
00:23:06.365 ==> default: -- Nvram:
00:23:06.365 ==> default: -- Base box: spdk/fedora39
00:23:06.365 ==> default: -- Storage pool: default
00:23:06.365 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728481932_7bc195900d0e7b51fb87.img (20G)
00:23:06.365 ==> default: -- Volume Cache: default
00:23:06.365 ==> default: -- Kernel:
00:23:06.365 ==> default: -- Initrd:
00:23:06.365 ==> default: -- Graphics Type: vnc
00:23:06.365 ==> default: -- Graphics Port: -1
00:23:06.365 ==> default: -- Graphics IP: 127.0.0.1
00:23:06.365 ==> default: -- Graphics Password: Not defined
00:23:06.365 ==> default: -- Video Type: cirrus
00:23:06.365 ==> default: -- Video VRAM: 9216
00:23:06.365 ==> default: -- Sound Type:
00:23:06.365 ==> default: -- Keymap: en-us
00:23:06.365 ==> default: -- TPM Path:
00:23:06.365 ==> default: -- INPUT: type=mouse, bus=ps2
00:23:06.365 ==> default: -- Command line args:
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:23:06.365 ==> default: -> value=-drive,
00:23:06.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:23:06.365 ==> default: -> value=-drive,
00:23:06.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:06.365 ==> default: -> value=-drive,
00:23:06.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:06.365 ==> default: -> value=-drive,
00:23:06.365 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:23:06.365 ==> default: -> value=-device,
00:23:06.365 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:23:06.624 ==> default: Creating shared folders metadata...
00:23:06.624 ==> default: Starting domain.
00:23:08.595 ==> default: Waiting for domain to get an IP address...
00:23:23.539 ==> default: Waiting for SSH to become available...
00:23:25.451 ==> default: Configuring and enabling network interfaces...
00:23:30.714 default: SSH address: 192.168.121.98:22
00:23:30.714 default: SSH username: vagrant
00:23:30.714 default: SSH auth method: private key
00:23:32.616 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:23:42.603 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:23:47.926 ==> default: Mounting SSHFS shared folder...
00:23:49.825 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:23:49.825 ==> default: Checking Mount..
00:23:51.200 ==> default: Folder Successfully Mounted!
00:23:51.200 ==> default: Running provisioner: file...
00:23:52.136 default: ~/.gitconfig => .gitconfig
00:23:52.394
00:23:52.394 SUCCESS!
00:23:52.394
00:23:52.394 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:23:52.394 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:23:52.394 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:23:52.394
00:23:52.402 [Pipeline] }
00:23:52.418 [Pipeline] // stage
00:23:52.427 [Pipeline] dir
00:23:52.428 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:23:52.430 [Pipeline] {
00:23:52.441 [Pipeline] catchError
00:23:52.443 [Pipeline] {
00:23:52.456 [Pipeline] sh
00:23:52.736 + vagrant ssh-config --host vagrant
00:23:52.736 + sed -ne /^Host/,$p
00:23:52.736 + tee ssh_conf
00:23:56.926 Host vagrant
00:23:56.926 HostName 192.168.121.98
00:23:56.926 User vagrant
00:23:56.926 Port 22
00:23:56.926 UserKnownHostsFile /dev/null
00:23:56.926 StrictHostKeyChecking no
00:23:56.926 PasswordAuthentication no
00:23:56.926 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:23:56.926 IdentitiesOnly yes
00:23:56.926 LogLevel FATAL
00:23:56.926 ForwardAgent yes
00:23:56.926 ForwardX11 yes
00:23:56.926
00:23:56.940 [Pipeline] withEnv
00:23:56.942 [Pipeline] {
00:23:56.955 [Pipeline] sh
00:23:57.237 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:23:57.237 source /etc/os-release
00:23:57.237 [[ -e /image.version ]] && img=$(< /image.version)
00:23:57.237 # Minimal, systemd-like check.
00:23:57.237 if [[ -e /.dockerenv ]]; then
00:23:57.237 # Clear garbage from the node's name:
00:23:57.237 # agt-er_autotest_547-896 -> autotest_547-896
00:23:57.237 # $HOSTNAME is the actual container id
00:23:57.237 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:23:57.237 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:23:57.237 # We can assume this is a mount from a host where container is running,
00:23:57.237 # so fetch its hostname to easily identify the target swarm worker.
00:23:57.237 container="$(< /etc/hostname) ($agent)"
00:23:57.237 else
00:23:57.237 # Fallback
00:23:57.237 container=$agent
00:23:57.237 fi
00:23:57.237 fi
00:23:57.237 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:23:57.237
00:23:57.505 [Pipeline] }
00:23:57.521 [Pipeline] // withEnv
00:23:57.530 [Pipeline] setCustomBuildProperty
00:23:57.544 [Pipeline] stage
00:23:57.546 [Pipeline] { (Tests)
00:23:57.563 [Pipeline] sh
00:23:57.844 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:23:58.115 [Pipeline] sh
00:23:58.398 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:23:58.672 [Pipeline] timeout
00:23:58.673 Timeout set to expire in 1 hr 30 min
00:23:58.675 [Pipeline] {
00:23:58.690 [Pipeline] sh
00:23:58.971 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:23:59.538 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:23:59.545 [Pipeline] sh
00:23:59.819 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:24:00.089 [Pipeline] sh
00:24:00.368 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:24:00.642 [Pipeline] sh
00:24:00.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:24:01.179 ++ readlink -f spdk_repo
00:24:01.179 + DIR_ROOT=/home/vagrant/spdk_repo
00:24:01.180 + [[ -n /home/vagrant/spdk_repo ]]
00:24:01.180 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:24:01.180 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:24:01.180 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:24:01.180 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:24:01.180 + [[ -d /home/vagrant/spdk_repo/output ]]
00:24:01.180 + [[ raid-vg-autotest == pkgdep-* ]]
00:24:01.180 + cd /home/vagrant/spdk_repo
00:24:01.180 + source /etc/os-release
00:24:01.180 ++ NAME='Fedora Linux'
00:24:01.180 ++ VERSION='39 (Cloud Edition)'
00:24:01.180 ++ ID=fedora
00:24:01.180 ++ VERSION_ID=39
00:24:01.180 ++ VERSION_CODENAME=
00:24:01.180 ++ PLATFORM_ID=platform:f39
00:24:01.180 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:24:01.180 ++ ANSI_COLOR='0;38;2;60;110;180'
00:24:01.180 ++ LOGO=fedora-logo-icon
00:24:01.180 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:24:01.180 ++ HOME_URL=https://fedoraproject.org/
00:24:01.180 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:24:01.180 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:24:01.180 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:24:01.180 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:24:01.180 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:24:01.180 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:24:01.180 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:24:01.180 ++ SUPPORT_END=2024-11-12
00:24:01.180 ++ VARIANT='Cloud Edition'
00:24:01.180 ++ VARIANT_ID=cloud
00:24:01.180 + uname -a
00:24:01.180 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:24:01.180 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:24:01.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:24:01.819 Hugepages
00:24:01.819 node hugesize free / total
00:24:01.819 node0 1048576kB 0 / 0
00:24:01.819 node0 2048kB 0 / 0
00:24:01.819
00:24:01.819 Type BDF Vendor Device NUMA Driver Device Block devices
00:24:01.819 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:24:01.819 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:24:01.819 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:24:01.819 + rm -f /tmp/spdk-ld-path
00:24:01.819 + source autorun-spdk.conf
00:24:01.819 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:24:01.819 ++ SPDK_RUN_ASAN=1
00:24:01.819 ++ SPDK_RUN_UBSAN=1
00:24:01.819 ++ SPDK_TEST_RAID=1
00:24:01.819 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:24:01.819 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:24:01.819 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:24:01.819 ++ RUN_NIGHTLY=1
00:24:01.819 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:24:01.819 + [[ -n '' ]]
00:24:01.819 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:24:01.819 + for M in /var/spdk/build-*-manifest.txt
00:24:01.819 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:24:01.819 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:24:01.819 + for M in /var/spdk/build-*-manifest.txt
00:24:01.819 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:24:01.819 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:24:01.819 + for M in /var/spdk/build-*-manifest.txt
00:24:01.819 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:24:01.819 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:24:01.819 ++ uname
00:24:01.819 + [[ Linux == \L\i\n\u\x ]]
00:24:01.819 + sudo dmesg -T
00:24:01.819 + sudo dmesg --clear
00:24:01.819 + dmesg_pid=6008
00:24:01.819 + [[ Fedora Linux == FreeBSD ]]
00:24:01.819 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:01.819 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:01.820 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:24:01.820 + [[ -x /usr/src/fio-static/fio ]]
00:24:01.820 + sudo dmesg -Tw
00:24:01.820 + export FIO_BIN=/usr/src/fio-static/fio
00:24:01.820 + FIO_BIN=/usr/src/fio-static/fio
00:24:01.820 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:24:01.820 + [[ ! -v VFIO_QEMU_BIN ]]
00:24:01.820 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:24:01.820 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:01.820 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:01.820 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:24:01.820 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:01.820 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:01.820 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:24:01.820 Test configuration:
00:24:01.820 SPDK_RUN_FUNCTIONAL_TEST=1
00:24:01.820 SPDK_RUN_ASAN=1
00:24:01.820 SPDK_RUN_UBSAN=1
00:24:01.820 SPDK_TEST_RAID=1
00:24:01.820 SPDK_TEST_NATIVE_DPDK=v23.11
00:24:01.820 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:24:01.820 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:24:01.820 RUN_NIGHTLY=1
00:24:01.820 13:53:08 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:24:01.820 13:53:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:01.820 13:53:08 -- scripts/common.sh@15 -- $ shopt -s extglob
00:24:01.820 13:53:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:01.820 13:53:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:01.820 13:53:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:01.820 13:53:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:01.820 13:53:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:01.820 13:53:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:01.820 13:53:08 -- paths/export.sh@5 -- $ export PATH
00:24:01.820 13:53:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:02.079 13:53:08 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:24:02.079 13:53:08 -- common/autobuild_common.sh@479 -- $ date +%s
00:24:02.079 13:53:08 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728481988.XXXXXX
00:24:02.079 13:53:08 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728481988.WYGvkU
00:24:02.079 13:53:08 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:24:02.079 13:53:08 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:24:02.079 13:53:08 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:24:02.079 13:53:08 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:24:02.079 13:53:08 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:24:02.079 13:53:08 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:24:02.079 13:53:08 -- common/autobuild_common.sh@495 -- $ get_config_params
00:24:02.079 13:53:08 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:24:02.079 13:53:08 -- common/autotest_common.sh@10 -- $ set +x
00:24:02.079 13:53:08 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:24:02.079 13:53:08 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:24:02.079 13:53:08 -- pm/common@17 -- $ local monitor
00:24:02.079 13:53:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:02.079 13:53:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:02.079 13:53:08 -- pm/common@25 -- $ sleep 1
00:24:02.079 13:53:08 -- pm/common@21 -- $ date +%s
00:24:02.079 13:53:08 -- pm/common@21 -- $ date +%s
00:24:02.079 13:53:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728481988
00:24:02.079 13:53:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728481988
00:24:02.079 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728481988_collect-vmstat.pm.log
00:24:02.079 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728481988_collect-cpu-load.pm.log
00:24:03.014 13:53:09 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:24:03.014 13:53:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:24:03.014 13:53:09 -- spdk/autobuild.sh@12 -- $ umask 022
00:24:03.014 13:53:09 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:24:03.014 13:53:09 -- spdk/autobuild.sh@16 -- $ date -u
00:24:03.014 Wed Oct 9 01:53:09 PM UTC 2024
00:24:03.014 13:53:09 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:24:03.014 v24.09-rc1-9-gb18e1bd62
00:24:03.014 13:53:09 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:24:03.014 13:53:09 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:24:03.014 13:53:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:24:03.014 13:53:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:24:03.014 13:53:09 -- common/autotest_common.sh@10 -- $ set +x
00:24:03.014 ************************************
00:24:03.014 START TEST asan
00:24:03.014 ************************************
00:24:03.014 using asan
00:24:03.014 13:53:09 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:24:03.014
00:24:03.014 real 0m0.000s
00:24:03.014 user 0m0.000s
00:24:03.014 sys 0m0.000s
00:24:03.014 13:53:09 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:24:03.014 ************************************
00:24:03.014 13:53:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:24:03.014 END TEST asan
00:24:03.014 ************************************
00:24:03.014 13:53:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:24:03.014 13:53:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:24:03.014 13:53:09 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:24:03.014 13:53:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:24:03.014 13:53:09 -- common/autotest_common.sh@10 -- $ set +x
00:24:03.014 ************************************
00:24:03.014 START TEST ubsan
00:24:03.014 ************************************
00:24:03.014 using ubsan
00:24:03.014 13:53:09 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:24:03.014
00:24:03.014 real 0m0.000s
00:24:03.014 user 0m0.000s
00:24:03.014 sys 0m0.000s
00:24:03.014 13:53:09 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:24:03.014 13:53:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:24:03.014 ************************************
00:24:03.014 END TEST ubsan
00:24:03.014 ************************************
00:24:03.014 13:53:09 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:24:03.014 13:53:09 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:24:03.014 13:53:09 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:24:03.014 13:53:09 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:24:03.014 13:53:09 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:24:03.014 13:53:09 -- common/autotest_common.sh@10 -- $ set +x
00:24:03.014 ************************************
00:24:03.014 START TEST build_native_dpdk
00:24:03.014 ************************************
00:24:03.014 13:53:09 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ !
-d /home/vagrant/spdk_repo/dpdk ]] 00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:24:03.014 13:53:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:24:03.273 eeb0605f11 version: 23.11.0 00:24:03.273 238778122a doc: update release notes for 23.11 00:24:03.273 46aa6b3cfc doc: fix description of RSS features 00:24:03.273 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:24:03.273 7e421ae345 devtools: support skipping forbid rule check 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:24:03.273 13:53:09 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:24:03.273 patching file config/rte_config.h 00:24:03.273 Hunk #1 succeeded at 60 (offset 1 line). 
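The xtrace above shows scripts/common.sh comparing dotted version strings field by field (split on `IFS=.-`, compare each numeric component, fall through on ties). A standalone sketch of that comparison, modeled on the trace output rather than the actual scripts/common.sh source, which may differ:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, reconstructed from the
# cmp_versions xtrace in this log (the real scripts/common.sh may differ).
lt() {
  local -a v1 v2
  local i max
  IFS=.- read -ra v1 <<< "$1"   # split "23.11.0" -> (23 11 0)
  IFS=.- read -ra v2 <<< "$2"
  max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly greater: not lt
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller: lt
  done
  return 1   # equal versions are not "less than"
}

lt 23.11.0 21.11.0 && echo "lt: yes" || echo "lt: no"   # prints "lt: no"
```

This matches the decisions visible in the log: `lt 23.11.0 21.11.0` returns 1 (so the legacy patch path is skipped), while `lt 23.11.0 24.07.0` returns 0 (so the rte_pcapng patch is applied).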
00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:24:03.273 patching file lib/pcapng/rte_pcapng.c 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:24:03.273 13:53:09 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:24:03.273 13:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:24:03.273 13:53:09 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:24:08.543 The Meson build system 00:24:08.543 Version: 1.5.0 00:24:08.543 Source dir: /home/vagrant/spdk_repo/dpdk 00:24:08.543 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:24:08.543 Build type: native build 00:24:08.543 Program cat found: YES (/usr/bin/cat) 00:24:08.543 Project name: DPDK 00:24:08.543 Project version: 23.11.0 00:24:08.543 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:24:08.543 C linker for the host machine: gcc ld.bfd 2.40-14 00:24:08.543 Host machine cpu family: x86_64 00:24:08.543 Host machine cpu: x86_64 00:24:08.543 Message: ## Building in Developer Mode ## 00:24:08.543 Program pkg-config found: YES (/usr/bin/pkg-config) 00:24:08.543 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:24:08.543 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:24:08.543 Program python3 found: YES (/usr/bin/python3) 00:24:08.543 Program cat found: YES (/usr/bin/cat) 00:24:08.543 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:24:08.543 Compiler for C supports arguments -march=native: YES 00:24:08.543 Checking for size of "void *" : 8 00:24:08.543 Checking for size of "void *" : 8 (cached) 00:24:08.543 Library m found: YES 00:24:08.543 Library numa found: YES 00:24:08.543 Has header "numaif.h" : YES 00:24:08.543 Library fdt found: NO 00:24:08.543 Library execinfo found: NO 00:24:08.543 Has header "execinfo.h" : YES 00:24:08.543 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:24:08.543 Run-time dependency libarchive found: NO (tried pkgconfig) 00:24:08.543 Run-time dependency libbsd found: NO (tried pkgconfig) 00:24:08.543 Run-time dependency jansson found: NO (tried pkgconfig) 00:24:08.543 Run-time dependency openssl found: YES 3.1.1 00:24:08.543 Run-time dependency libpcap found: YES 1.10.4 00:24:08.543 Has header "pcap.h" with dependency libpcap: YES 00:24:08.543 Compiler for C supports arguments -Wcast-qual: YES 00:24:08.543 Compiler for C supports arguments -Wdeprecated: YES 00:24:08.543 Compiler for C supports arguments -Wformat: YES 00:24:08.543 Compiler for C supports arguments -Wformat-nonliteral: NO 00:24:08.543 Compiler for C supports arguments -Wformat-security: NO 00:24:08.543 Compiler for C supports arguments -Wmissing-declarations: YES 00:24:08.544 Compiler for C supports arguments -Wmissing-prototypes: YES 00:24:08.544 Compiler for C supports arguments -Wnested-externs: YES 00:24:08.544 Compiler for C supports arguments -Wold-style-definition: YES 00:24:08.544 Compiler for C supports arguments -Wpointer-arith: YES 00:24:08.544 Compiler for C supports arguments -Wsign-compare: YES 00:24:08.544 Compiler for C supports arguments -Wstrict-prototypes: YES 00:24:08.544 Compiler for C supports arguments -Wundef: YES 00:24:08.544 Compiler for C supports arguments -Wwrite-strings: YES 00:24:08.544 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:24:08.544 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:24:08.544 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:24:08.544 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:24:08.544 Program objdump found: YES (/usr/bin/objdump) 00:24:08.544 Compiler for C supports arguments -mavx512f: YES 00:24:08.544 Checking if "AVX512 checking" compiles: YES 00:24:08.544 Fetching value of define "__SSE4_2__" : 1 00:24:08.544 Fetching value of define "__AES__" : 1 00:24:08.544 Fetching value of define "__AVX__" : 1 00:24:08.544 Fetching value of define "__AVX2__" : 1 00:24:08.544 Fetching value of define "__AVX512BW__" : 1 00:24:08.544 Fetching value of define "__AVX512CD__" : 1 00:24:08.544 Fetching value of define "__AVX512DQ__" : 1 00:24:08.544 Fetching value of define "__AVX512F__" : 1 00:24:08.544 Fetching value of define "__AVX512VL__" : 1 00:24:08.544 Fetching value of define "__PCLMUL__" : 1 00:24:08.544 Fetching value of define "__RDRND__" : 1 00:24:08.544 Fetching value of define "__RDSEED__" : 1 00:24:08.544 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:24:08.544 Fetching value of define "__znver1__" : (undefined) 00:24:08.544 Fetching value of define "__znver2__" : (undefined) 00:24:08.544 Fetching value of define "__znver3__" : (undefined) 00:24:08.544 Fetching value of define "__znver4__" : (undefined) 00:24:08.544 Compiler for C supports arguments -Wno-format-truncation: YES 00:24:08.544 Message: lib/log: Defining dependency "log" 00:24:08.544 Message: lib/kvargs: Defining dependency "kvargs" 00:24:08.544 Message: lib/telemetry: Defining dependency "telemetry" 00:24:08.544 Checking for function "getentropy" : NO 00:24:08.544 Message: lib/eal: Defining dependency "eal" 00:24:08.544 Message: lib/ring: Defining dependency "ring" 00:24:08.544 Message: lib/rcu: Defining dependency "rcu" 00:24:08.544 Message: lib/mempool: Defining dependency "mempool" 00:24:08.544 Message: lib/mbuf: Defining dependency "mbuf" 00:24:08.544 Fetching value of define "__PCLMUL__" : 1 (cached) 00:24:08.544 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512BW__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512VL__" : 1 (cached) 00:24:08.544 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:24:08.544 Compiler for C supports arguments -mpclmul: YES 00:24:08.544 Compiler for C supports arguments -maes: YES 00:24:08.544 Compiler for C supports arguments -mavx512f: YES (cached) 00:24:08.544 Compiler for C supports arguments -mavx512bw: YES 00:24:08.544 Compiler for C supports arguments -mavx512dq: YES 00:24:08.544 Compiler for C supports arguments -mavx512vl: YES 00:24:08.544 Compiler for C supports arguments -mvpclmulqdq: YES 00:24:08.544 Compiler for C supports arguments -mavx2: YES 00:24:08.544 Compiler for C supports arguments -mavx: YES 00:24:08.544 Message: lib/net: Defining dependency "net" 00:24:08.544 Message: lib/meter: Defining dependency "meter" 00:24:08.544 Message: lib/ethdev: Defining dependency "ethdev" 00:24:08.544 Message: lib/pci: Defining dependency "pci" 00:24:08.544 Message: lib/cmdline: Defining dependency "cmdline" 00:24:08.544 Message: lib/metrics: Defining dependency "metrics" 00:24:08.544 Message: lib/hash: Defining dependency "hash" 00:24:08.544 Message: lib/timer: Defining dependency "timer" 00:24:08.544 Fetching value of define "__AVX512F__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512VL__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512CD__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512BW__" : 1 (cached) 00:24:08.544 Message: lib/acl: Defining dependency "acl" 00:24:08.544 Message: lib/bbdev: Defining dependency "bbdev" 00:24:08.544 Message: lib/bitratestats: Defining dependency "bitratestats" 00:24:08.544 Run-time dependency libelf found: YES 0.191 00:24:08.544 Message: lib/bpf: Defining dependency "bpf" 00:24:08.544 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:24:08.544 Message: lib/compressdev: Defining dependency "compressdev" 00:24:08.544 Message: lib/cryptodev: Defining dependency "cryptodev" 00:24:08.544 Message: lib/distributor: Defining dependency "distributor" 00:24:08.544 Message: lib/dmadev: Defining dependency "dmadev" 00:24:08.544 Message: lib/efd: Defining dependency "efd" 00:24:08.544 Message: lib/eventdev: Defining dependency "eventdev" 00:24:08.544 Message: lib/dispatcher: Defining dependency "dispatcher" 00:24:08.544 Message: lib/gpudev: Defining dependency "gpudev" 00:24:08.544 Message: lib/gro: Defining dependency "gro" 00:24:08.544 Message: lib/gso: Defining dependency "gso" 00:24:08.544 Message: lib/ip_frag: Defining dependency "ip_frag" 00:24:08.544 Message: lib/jobstats: Defining dependency "jobstats" 00:24:08.544 Message: lib/latencystats: Defining dependency "latencystats" 00:24:08.544 Message: lib/lpm: Defining dependency "lpm" 00:24:08.544 Fetching value of define "__AVX512F__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512IFMA__" : (undefined) 00:24:08.544 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:24:08.544 Message: lib/member: Defining dependency "member" 00:24:08.544 Message: lib/pcapng: Defining dependency "pcapng" 00:24:08.544 Compiler for C supports arguments -Wno-cast-qual: YES 00:24:08.544 Message: lib/power: Defining dependency "power" 00:24:08.544 Message: lib/rawdev: Defining dependency "rawdev" 00:24:08.544 Message: lib/regexdev: Defining dependency "regexdev" 00:24:08.544 Message: lib/mldev: Defining dependency "mldev" 00:24:08.544 Message: lib/rib: Defining dependency "rib" 00:24:08.544 Message: lib/reorder: Defining dependency "reorder" 00:24:08.544 Message: lib/sched: Defining dependency "sched" 00:24:08.544 Message: lib/security: Defining dependency "security" 00:24:08.544 Message: lib/stack: Defining dependency "stack" 00:24:08.544 Has header 
"linux/userfaultfd.h" : YES 00:24:08.544 Has header "linux/vduse.h" : YES 00:24:08.544 Message: lib/vhost: Defining dependency "vhost" 00:24:08.544 Message: lib/ipsec: Defining dependency "ipsec" 00:24:08.544 Message: lib/pdcp: Defining dependency "pdcp" 00:24:08.544 Fetching value of define "__AVX512F__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:24:08.544 Fetching value of define "__AVX512BW__" : 1 (cached) 00:24:08.544 Message: lib/fib: Defining dependency "fib" 00:24:08.544 Message: lib/port: Defining dependency "port" 00:24:08.544 Message: lib/pdump: Defining dependency "pdump" 00:24:08.544 Message: lib/table: Defining dependency "table" 00:24:08.544 Message: lib/pipeline: Defining dependency "pipeline" 00:24:08.544 Message: lib/graph: Defining dependency "graph" 00:24:08.544 Message: lib/node: Defining dependency "node" 00:24:08.544 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:24:08.544 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:24:08.544 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:24:10.592 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:24:10.592 Compiler for C supports arguments -Wno-sign-compare: YES 00:24:10.592 Compiler for C supports arguments -Wno-unused-value: YES 00:24:10.592 Compiler for C supports arguments -Wno-format: YES 00:24:10.592 Compiler for C supports arguments -Wno-format-security: YES 00:24:10.592 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:24:10.592 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:24:10.592 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:24:10.592 Compiler for C supports arguments -Wno-unused-parameter: YES 00:24:10.592 Fetching value of define "__AVX512F__" : 1 (cached) 00:24:10.592 Fetching value of define "__AVX512BW__" : 1 (cached) 00:24:10.592 Compiler for C supports arguments -mavx512f: YES (cached) 00:24:10.592 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:24:10.592 Compiler for C supports arguments -march=skylake-avx512: YES 00:24:10.592 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:24:10.592 Has header "sys/epoll.h" : YES 00:24:10.592 Program doxygen found: YES (/usr/local/bin/doxygen) 00:24:10.592 Configuring doxy-api-html.conf using configuration 00:24:10.592 Configuring doxy-api-man.conf using configuration 00:24:10.592 Program mandb found: YES (/usr/bin/mandb) 00:24:10.592 Program sphinx-build found: NO 00:24:10.593 Configuring rte_build_config.h using configuration 00:24:10.593 Message: 00:24:10.593 ================= 00:24:10.593 Applications Enabled 00:24:10.593 ================= 00:24:10.593 00:24:10.593 apps: 00:24:10.593 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:24:10.593 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:24:10.593 test-pmd, test-regex, test-sad, test-security-perf, 00:24:10.593 00:24:10.593 Message: 00:24:10.593 ================= 00:24:10.593 Libraries Enabled 00:24:10.593 ================= 00:24:10.593 00:24:10.593 libs: 00:24:10.593 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:24:10.593 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:24:10.593 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:24:10.593 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:24:10.593 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:24:10.593 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:24:10.593 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:24:10.593 00:24:10.593 00:24:10.593 Message: 00:24:10.593 =============== 00:24:10.593 Drivers Enabled 00:24:10.593 =============== 00:24:10.593 00:24:10.593 common: 00:24:10.593 00:24:10.593 bus: 00:24:10.593 pci, vdev, 00:24:10.593 mempool: 00:24:10.593 ring, 00:24:10.593 dma: 
00:24:10.593 00:24:10.593 net: 00:24:10.593 i40e, 00:24:10.593 raw: 00:24:10.593 00:24:10.593 crypto: 00:24:10.593 00:24:10.593 compress: 00:24:10.593 00:24:10.593 regex: 00:24:10.593 00:24:10.593 ml: 00:24:10.593 00:24:10.593 vdpa: 00:24:10.593 00:24:10.593 event: 00:24:10.593 00:24:10.593 baseband: 00:24:10.593 00:24:10.593 gpu: 00:24:10.593 00:24:10.593 00:24:10.593 Message: 00:24:10.593 ================= 00:24:10.593 Content Skipped 00:24:10.593 ================= 00:24:10.593 00:24:10.593 apps: 00:24:10.593 00:24:10.593 libs: 00:24:10.593 00:24:10.593 drivers: 00:24:10.593 common/cpt: not in enabled drivers build config 00:24:10.593 common/dpaax: not in enabled drivers build config 00:24:10.593 common/iavf: not in enabled drivers build config 00:24:10.593 common/idpf: not in enabled drivers build config 00:24:10.593 common/mvep: not in enabled drivers build config 00:24:10.593 common/octeontx: not in enabled drivers build config 00:24:10.593 bus/auxiliary: not in enabled drivers build config 00:24:10.593 bus/cdx: not in enabled drivers build config 00:24:10.593 bus/dpaa: not in enabled drivers build config 00:24:10.593 bus/fslmc: not in enabled drivers build config 00:24:10.593 bus/ifpga: not in enabled drivers build config 00:24:10.593 bus/platform: not in enabled drivers build config 00:24:10.593 bus/vmbus: not in enabled drivers build config 00:24:10.593 common/cnxk: not in enabled drivers build config 00:24:10.593 common/mlx5: not in enabled drivers build config 00:24:10.593 common/nfp: not in enabled drivers build config 00:24:10.593 common/qat: not in enabled drivers build config 00:24:10.593 common/sfc_efx: not in enabled drivers build config 00:24:10.593 mempool/bucket: not in enabled drivers build config 00:24:10.593 mempool/cnxk: not in enabled drivers build config 00:24:10.593 mempool/dpaa: not in enabled drivers build config 00:24:10.593 mempool/dpaa2: not in enabled drivers build config 00:24:10.593 mempool/octeontx: not in enabled drivers build 
config 00:24:10.593 mempool/stack: not in enabled drivers build config 00:24:10.593 dma/cnxk: not in enabled drivers build config 00:24:10.593 dma/dpaa: not in enabled drivers build config 00:24:10.593 dma/dpaa2: not in enabled drivers build config 00:24:10.593 dma/hisilicon: not in enabled drivers build config 00:24:10.593 dma/idxd: not in enabled drivers build config 00:24:10.593 dma/ioat: not in enabled drivers build config 00:24:10.593 dma/skeleton: not in enabled drivers build config 00:24:10.593 net/af_packet: not in enabled drivers build config 00:24:10.593 net/af_xdp: not in enabled drivers build config 00:24:10.593 net/ark: not in enabled drivers build config 00:24:10.593 net/atlantic: not in enabled drivers build config 00:24:10.593 net/avp: not in enabled drivers build config 00:24:10.593 net/axgbe: not in enabled drivers build config 00:24:10.593 net/bnx2x: not in enabled drivers build config 00:24:10.593 net/bnxt: not in enabled drivers build config 00:24:10.593 net/bonding: not in enabled drivers build config 00:24:10.593 net/cnxk: not in enabled drivers build config 00:24:10.593 net/cpfl: not in enabled drivers build config 00:24:10.593 net/cxgbe: not in enabled drivers build config 00:24:10.593 net/dpaa: not in enabled drivers build config 00:24:10.593 net/dpaa2: not in enabled drivers build config 00:24:10.593 net/e1000: not in enabled drivers build config 00:24:10.593 net/ena: not in enabled drivers build config 00:24:10.593 net/enetc: not in enabled drivers build config 00:24:10.593 net/enetfec: not in enabled drivers build config 00:24:10.593 net/enic: not in enabled drivers build config 00:24:10.593 net/failsafe: not in enabled drivers build config 00:24:10.593 net/fm10k: not in enabled drivers build config 00:24:10.593 net/gve: not in enabled drivers build config 00:24:10.593 net/hinic: not in enabled drivers build config 00:24:10.593 net/hns3: not in enabled drivers build config 00:24:10.593 net/iavf: not in enabled drivers build config 
00:24:10.593 net/ice: not in enabled drivers build config 00:24:10.593 net/idpf: not in enabled drivers build config 00:24:10.593 net/igc: not in enabled drivers build config 00:24:10.593 net/ionic: not in enabled drivers build config 00:24:10.593 net/ipn3ke: not in enabled drivers build config 00:24:10.593 net/ixgbe: not in enabled drivers build config 00:24:10.593 net/mana: not in enabled drivers build config 00:24:10.593 net/memif: not in enabled drivers build config 00:24:10.593 net/mlx4: not in enabled drivers build config 00:24:10.593 net/mlx5: not in enabled drivers build config 00:24:10.593 net/mvneta: not in enabled drivers build config 00:24:10.593 net/mvpp2: not in enabled drivers build config 00:24:10.593 net/netvsc: not in enabled drivers build config 00:24:10.593 net/nfb: not in enabled drivers build config 00:24:10.593 net/nfp: not in enabled drivers build config 00:24:10.593 net/ngbe: not in enabled drivers build config 00:24:10.593 net/null: not in enabled drivers build config 00:24:10.593 net/octeontx: not in enabled drivers build config 00:24:10.593 net/octeon_ep: not in enabled drivers build config 00:24:10.593 net/pcap: not in enabled drivers build config 00:24:10.593 net/pfe: not in enabled drivers build config 00:24:10.593 net/qede: not in enabled drivers build config 00:24:10.593 net/ring: not in enabled drivers build config 00:24:10.593 net/sfc: not in enabled drivers build config 00:24:10.593 net/softnic: not in enabled drivers build config 00:24:10.593 net/tap: not in enabled drivers build config 00:24:10.593 net/thunderx: not in enabled drivers build config 00:24:10.593 net/txgbe: not in enabled drivers build config 00:24:10.593 net/vdev_netvsc: not in enabled drivers build config 00:24:10.593 net/vhost: not in enabled drivers build config 00:24:10.593 net/virtio: not in enabled drivers build config 00:24:10.593 net/vmxnet3: not in enabled drivers build config 00:24:10.593 raw/cnxk_bphy: not in enabled drivers build config 00:24:10.593 
raw/cnxk_gpio: not in enabled drivers build config 00:24:10.593 raw/dpaa2_cmdif: not in enabled drivers build config 00:24:10.593 raw/ifpga: not in enabled drivers build config 00:24:10.593 raw/ntb: not in enabled drivers build config 00:24:10.593 raw/skeleton: not in enabled drivers build config 00:24:10.593 crypto/armv8: not in enabled drivers build config 00:24:10.593 crypto/bcmfs: not in enabled drivers build config 00:24:10.593 crypto/caam_jr: not in enabled drivers build config 00:24:10.593 crypto/ccp: not in enabled drivers build config 00:24:10.593 crypto/cnxk: not in enabled drivers build config 00:24:10.593 crypto/dpaa_sec: not in enabled drivers build config 00:24:10.593 crypto/dpaa2_sec: not in enabled drivers build config 00:24:10.593 crypto/ipsec_mb: not in enabled drivers build config 00:24:10.593 crypto/mlx5: not in enabled drivers build config 00:24:10.593 crypto/mvsam: not in enabled drivers build config 00:24:10.593 crypto/nitrox: not in enabled drivers build config 00:24:10.593 crypto/null: not in enabled drivers build config 00:24:10.593 crypto/octeontx: not in enabled drivers build config 00:24:10.593 crypto/openssl: not in enabled drivers build config 00:24:10.593 crypto/scheduler: not in enabled drivers build config 00:24:10.593 crypto/uadk: not in enabled drivers build config 00:24:10.593 crypto/virtio: not in enabled drivers build config 00:24:10.593 compress/isal: not in enabled drivers build config 00:24:10.593 compress/mlx5: not in enabled drivers build config 00:24:10.593 compress/octeontx: not in enabled drivers build config 00:24:10.593 compress/zlib: not in enabled drivers build config 00:24:10.593 regex/mlx5: not in enabled drivers build config 00:24:10.593 regex/cn9k: not in enabled drivers build config 00:24:10.593 ml/cnxk: not in enabled drivers build config 00:24:10.593 vdpa/ifc: not in enabled drivers build config 00:24:10.593 vdpa/mlx5: not in enabled drivers build config 00:24:10.593 vdpa/nfp: not in enabled drivers build 
config 00:24:10.593 vdpa/sfc: not in enabled drivers build config 00:24:10.593 event/cnxk: not in enabled drivers build config 00:24:10.593 event/dlb2: not in enabled drivers build config 00:24:10.593 event/dpaa: not in enabled drivers build config 00:24:10.593 event/dpaa2: not in enabled drivers build config 00:24:10.593 event/dsw: not in enabled drivers build config 00:24:10.593 event/opdl: not in enabled drivers build config 00:24:10.593 event/skeleton: not in enabled drivers build config 00:24:10.593 event/sw: not in enabled drivers build config 00:24:10.593 event/octeontx: not in enabled drivers build config 00:24:10.593 baseband/acc: not in enabled drivers build config 00:24:10.593 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:24:10.593 baseband/fpga_lte_fec: not in enabled drivers build config 00:24:10.593 baseband/la12xx: not in enabled drivers build config 00:24:10.593 baseband/null: not in enabled drivers build config 00:24:10.593 baseband/turbo_sw: not in enabled drivers build config 00:24:10.594 gpu/cuda: not in enabled drivers build config 00:24:10.594 00:24:10.594 00:24:10.594 Build targets in project: 217 00:24:10.594 00:24:10.594 DPDK 23.11.0 00:24:10.594 00:24:10.594 User defined options 00:24:10.594 libdir : lib 00:24:10.594 prefix : /home/vagrant/spdk_repo/dpdk/build 00:24:10.594 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:24:10.594 c_link_args : 00:24:10.594 enable_docs : false 00:24:10.594 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:24:10.594 enable_kmods : false 00:24:10.594 machine : native 00:24:10.594 tests : false 00:24:10.594 00:24:10.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:24:10.594 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:24:10.594 13:53:16 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:24:10.594 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:24:10.594 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:24:10.594 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:24:10.594 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:24:10.594 [4/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:24:10.853 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:24:10.853 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:24:10.853 [7/707] Linking static target lib/librte_kvargs.a 00:24:10.853 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:24:10.853 [9/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:24:10.853 [10/707] Linking static target lib/librte_log.a 00:24:11.112 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:24:11.112 [12/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:24:11.112 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:24:11.112 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:24:11.112 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:24:11.112 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:24:11.369 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:24:11.369 [18/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:24:11.369 [19/707] Linking target lib/librte_log.so.24.0 00:24:11.369 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:24:11.627 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:24:11.627 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:24:11.627 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:24:11.627 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:24:11.627 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:24:11.627 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:24:11.627 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:24:11.885 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:24:11.885 [29/707] Linking target lib/librte_kvargs.so.24.0 00:24:11.885 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:24:11.885 [31/707] Linking static target lib/librte_telemetry.a 00:24:11.885 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:24:11.885 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:24:11.885 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:24:11.885 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:24:12.142 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:24:12.142 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:24:12.143 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:24:12.143 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:24:12.143 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:24:12.143 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:24:12.143 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:24:12.400 [43/707] Linking target lib/librte_telemetry.so.24.0 00:24:12.400 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:24:12.400 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:24:12.400 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:24:12.400 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:24:12.400 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:24:12.659 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:24:12.659 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:24:12.659 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:24:12.659 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:24:12.659 [53/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:24:12.916 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:24:12.916 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:24:12.916 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:24:12.916 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:24:12.916 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:24:12.916 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:24:12.916 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:24:12.916 [61/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:24:13.174 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:24:13.174 [63/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:24:13.174 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:24:13.174 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:24:13.174 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:24:13.174 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:24:13.174 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:24:13.431 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:24:13.431 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:24:13.431 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:24:13.431 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:24:13.431 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:24:13.431 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:24:13.431 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:24:13.431 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:24:13.431 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:24:13.689 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:24:13.689 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:24:13.689 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:24:13.948 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:24:13.948 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:24:13.948 [83/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:24:13.948 [84/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:24:13.948 [85/707] Linking static target lib/librte_ring.a 00:24:14.208 [86/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:24:14.208 [87/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:24:14.208 [88/707] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:24:14.208 [89/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:24:14.208 [90/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:24:14.208 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:24:14.208 [92/707] Linking static target lib/librte_eal.a 00:24:14.208 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:24:14.208 [94/707] Linking static target lib/librte_mempool.a 00:24:14.466 [95/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:24:14.724 [96/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:24:14.724 [97/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:24:14.724 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:24:14.724 [99/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:24:14.724 [100/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:24:14.982 [101/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:24:14.982 [102/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:24:14.982 [103/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:24:14.982 [104/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:24:14.982 [105/707] Linking static target lib/librte_rcu.a 00:24:15.239 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:24:15.239 [107/707] Linking static target lib/librte_net.a 00:24:15.239 [108/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:24:15.239 [109/707] Linking static target lib/librte_meter.a 00:24:15.239 [110/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:24:15.239 [111/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:24:15.497 [112/707] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:24:15.497 [113/707] Linking static target lib/librte_mbuf.a 00:24:15.497 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:24:15.497 [115/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:24:15.497 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:24:15.497 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:24:15.497 [118/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:24:16.064 [119/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:24:16.064 [120/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:24:16.322 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:24:16.322 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:24:16.637 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:24:16.637 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:24:16.637 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:24:16.637 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:24:16.637 [127/707] Linking static target lib/librte_pci.a 00:24:16.637 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:24:16.637 [129/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:24:16.637 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:24:16.914 [131/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:16.914 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:24:16.914 [133/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:24:16.914 [134/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:24:16.914 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:24:16.914 [136/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:24:16.914 [137/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:24:16.914 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:24:16.914 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:24:17.171 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:24:17.171 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:24:17.171 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:24:17.171 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:24:17.171 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:24:17.171 [145/707] Linking static target lib/librte_cmdline.a 00:24:17.429 [146/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:24:17.429 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:24:17.429 [148/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:24:17.686 [149/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:24:17.686 [150/707] Linking static target lib/librte_metrics.a 00:24:17.944 [151/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:24:17.944 [152/707] Linking static target lib/librte_timer.a 00:24:17.944 [153/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:24:18.202 [154/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:24:18.202 [155/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:24:18.202 [156/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:24:18.460 [157/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:24:18.719 [158/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:24:18.719 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:24:18.719 [160/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:24:18.977 [161/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:24:18.977 [162/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:24:19.235 [163/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:24:19.494 [164/707] Linking static target lib/librte_bitratestats.a 00:24:19.494 [165/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:24:19.752 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:24:19.752 [167/707] Linking static target lib/librte_bbdev.a 00:24:19.752 [168/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:19.752 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:24:20.010 [170/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:24:20.010 [171/707] Linking static target lib/acl/libavx2_tmp.a 00:24:20.010 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:24:20.010 [173/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:24:20.268 [174/707] Linking static target lib/librte_ethdev.a 00:24:20.268 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:24:20.268 [176/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:24:20.268 [177/707] Linking static target lib/librte_hash.a 00:24:20.527 [178/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:20.527 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:24:20.785 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:24:20.785 [181/707] 
Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:24:20.785 [182/707] Linking static target lib/librte_cfgfile.a 00:24:20.785 [183/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:24:21.042 [184/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:24:21.300 [185/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:24:21.300 [186/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:24:21.300 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:24:21.300 [188/707] Linking static target lib/librte_bpf.a 00:24:21.300 [189/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:24:21.300 [190/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:24:21.300 [191/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:24:21.300 [192/707] Linking static target lib/librte_acl.a 00:24:21.558 [193/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:24:21.558 [194/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:24:21.558 [195/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:24:21.816 [196/707] Linking static target lib/librte_compressdev.a 00:24:21.816 [197/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:24:21.816 [198/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:24:22.074 [199/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:24:22.074 [200/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:24:22.074 [201/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:24:22.332 [202/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:24:22.332 [203/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:24:22.332 [204/707] Linking static target lib/librte_distributor.a 00:24:22.332 [205/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:24:22.332 [206/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:24:22.590 [207/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:24:22.590 [208/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:24:22.590 [209/707] Linking static target lib/librte_dmadev.a 00:24:22.856 [210/707] Linking target lib/librte_eal.so.24.0 00:24:22.856 [211/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:24:22.856 [212/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:24:22.856 [213/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:24:22.856 [214/707] Linking target lib/librte_ring.so.24.0 00:24:23.145 [215/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:24:23.145 [216/707] Linking target lib/librte_rcu.so.24.0 00:24:23.145 [217/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:23.145 [218/707] Linking target lib/librte_mempool.so.24.0 00:24:23.403 [219/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:24:23.403 [220/707] Linking target lib/librte_meter.so.24.0 00:24:23.403 [221/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:24:23.403 [222/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:24:23.403 [223/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:24:23.403 [224/707] Linking target lib/librte_mbuf.so.24.0 00:24:23.403 [225/707] Linking target 
lib/librte_pci.so.24.0 00:24:23.403 [226/707] Linking target lib/librte_timer.so.24.0 00:24:23.404 [227/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:24:23.662 [228/707] Linking target lib/librte_acl.so.24.0 00:24:23.662 [229/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:24:23.662 [230/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:24:23.662 [231/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:24:23.662 [232/707] Linking target lib/librte_net.so.24.0 00:24:23.662 [233/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:24:23.662 [234/707] Linking target lib/librte_cfgfile.so.24.0 00:24:23.662 [235/707] Linking target lib/librte_bbdev.so.24.0 00:24:23.662 [236/707] Linking static target lib/librte_cryptodev.a 00:24:23.662 [237/707] Linking target lib/librte_compressdev.so.24.0 00:24:23.662 [238/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:24:23.662 [239/707] Linking target lib/librte_distributor.so.24.0 00:24:23.662 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:24:23.662 [241/707] Linking target lib/librte_dmadev.so.24.0 00:24:23.920 [242/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:24:23.920 [243/707] Linking target lib/librte_cmdline.so.24.0 00:24:23.920 [244/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:24:23.920 [245/707] Linking target lib/librte_hash.so.24.0 00:24:23.920 [246/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:24:24.177 [247/707] Linking static target lib/librte_efd.a 00:24:24.177 [248/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:24:24.177 [249/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:24:24.436 [250/707] 
Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:24:24.436 [251/707] Linking static target lib/librte_dispatcher.a 00:24:24.436 [252/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:24:24.436 [253/707] Linking target lib/librte_efd.so.24.0 00:24:24.694 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:24:24.694 [255/707] Linking static target lib/librte_gpudev.a 00:24:24.694 [256/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:24:24.953 [257/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:24:24.953 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:24:24.953 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:24:24.953 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:24:25.212 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:25.212 [262/707] Linking target lib/librte_cryptodev.so.24.0 00:24:25.470 [263/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:24:25.470 [264/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:24:25.470 [265/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:24:25.470 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:24:25.470 [267/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:24:25.470 [268/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:25.470 [269/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:24:25.729 [270/707] Linking static target lib/librte_gro.a 00:24:25.729 [271/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:24:25.729 [272/707] Linking target lib/librte_gpudev.so.24.0 00:24:25.729 [273/707] Compiling C object 
lib/librte_gso.a.p/gso_gso_udp4.c.o 00:24:25.729 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:24:25.729 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:24:25.987 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:24:25.987 [277/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:24:25.987 [278/707] Linking static target lib/librte_gso.a 00:24:25.987 [279/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:24:26.246 [280/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:26.246 [281/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:24:26.246 [282/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:24:26.246 [283/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:24:26.246 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:24:26.246 [285/707] Linking static target lib/librte_jobstats.a 00:24:26.246 [286/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:24:26.246 [287/707] Linking target lib/librte_ethdev.so.24.0 00:24:26.504 [288/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:24:26.504 [289/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:24:26.504 [290/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:24:26.504 [291/707] Linking static target lib/librte_eventdev.a 00:24:26.504 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:24:26.505 [293/707] Linking target lib/librte_bpf.so.24.0 00:24:26.505 [294/707] Linking target lib/librte_metrics.so.24.0 00:24:26.505 [295/707] Linking target lib/librte_gro.so.24.0 00:24:26.505 [296/707] Linking target 
lib/librte_gso.so.24.0 00:24:26.505 [297/707] Linking static target lib/librte_ip_frag.a 00:24:26.505 [298/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:24:26.505 [299/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:24:26.763 [300/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:26.763 [301/707] Linking target lib/librte_bitratestats.so.24.0 00:24:26.763 [302/707] Linking target lib/librte_jobstats.so.24.0 00:24:26.763 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:24:26.763 [304/707] Linking static target lib/librte_latencystats.a 00:24:26.763 [305/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:24:26.763 [306/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:24:26.763 [307/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:24:26.763 [308/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:24:27.021 [309/707] Linking target lib/librte_ip_frag.so.24.0 00:24:27.021 [310/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:24:27.021 [311/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:27.021 [312/707] Linking target lib/librte_latencystats.so.24.0 00:24:27.021 [313/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:24:27.021 [314/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:24:27.021 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:24:27.279 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:24:27.279 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:24:27.279 [318/707] Linking static target lib/librte_lpm.a 00:24:27.537 [319/707] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:24:27.537 [320/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:24:27.537 [321/707] Linking static target lib/librte_pcapng.a 00:24:27.537 [322/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:24:27.795 [323/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:24:27.795 [324/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:24:27.795 [325/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:24:27.795 [326/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:24:27.795 [327/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:24:27.795 [328/707] Linking target lib/librte_lpm.so.24.0 00:24:28.054 [329/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:24:28.054 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:24:28.054 [331/707] Linking target lib/librte_pcapng.so.24.0 00:24:28.054 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:24:28.054 [333/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:24:28.312 [334/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:24:28.312 [335/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:24:28.312 [336/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:24:28.312 [337/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:24:28.312 [338/707] Linking static target lib/librte_regexdev.a 00:24:28.312 [339/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:24:28.312 [340/707] Linking static target lib/librte_rawdev.a 00:24:28.312 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:24:28.570 [342/707] Linking 
static target lib/librte_power.a 00:24:28.570 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:24:28.570 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:24:28.829 [345/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:24:28.829 [346/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:24:28.829 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:24:28.829 [348/707] Linking static target lib/librte_member.a 00:24:28.829 [349/707] Linking static target lib/librte_mldev.a 00:24:28.829 [350/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.087 [351/707] Linking target lib/librte_rawdev.so.24.0 00:24:29.087 [352/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:24:29.087 [353/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.087 [354/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:24:29.346 [355/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.346 [356/707] Linking target lib/librte_eventdev.so.24.0 00:24:29.346 [357/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.346 [358/707] Linking target lib/librte_regexdev.so.24.0 00:24:29.346 [359/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.346 [360/707] Linking target lib/librte_power.so.24.0 00:24:29.346 [361/707] Linking target lib/librte_member.so.24.0 00:24:29.346 [362/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:24:29.346 [363/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:24:29.346 [364/707] Linking target lib/librte_dispatcher.so.24.0 00:24:29.604 [365/707] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:24:29.604 [366/707] Linking static target lib/librte_reorder.a 00:24:29.604 [367/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:24:29.604 [368/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:24:29.604 [369/707] Linking static target lib/librte_rib.a 00:24:29.604 [370/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:24:29.604 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:24:29.604 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:24:29.863 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:24:29.863 [374/707] Linking static target lib/librte_stack.a 00:24:29.863 [375/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:24:29.863 [376/707] Linking target lib/librte_reorder.so.24.0 00:24:29.863 [377/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:24:29.863 [378/707] Linking static target lib/librte_security.a 00:24:30.121 [379/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:24:30.121 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:24:30.121 [381/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:24:30.121 [382/707] Linking target lib/librte_stack.so.24.0 00:24:30.121 [383/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:24:30.121 [384/707] Linking target lib/librte_rib.so.24.0 00:24:30.379 [385/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:24:30.379 [386/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:30.379 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:24:30.379 [388/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture 
output) 00:24:30.379 [389/707] Linking target lib/librte_mldev.so.24.0 00:24:30.379 [390/707] Linking target lib/librte_security.so.24.0 00:24:30.379 [391/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:24:30.379 [392/707] Linking static target lib/librte_sched.a 00:24:30.638 [393/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:24:30.638 [394/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:24:30.896 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:24:30.896 [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:24:31.154 [397/707] Linking target lib/librte_sched.so.24.0 00:24:31.154 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:24:31.154 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:24:31.154 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:24:31.412 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:24:31.412 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:24:31.670 [403/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:24:31.939 [404/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:24:31.939 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:24:31.939 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:24:31.939 [407/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:24:32.196 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:24:32.455 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:24:32.455 [410/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:24:32.455 [411/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:24:32.455 [412/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:24:32.712 [413/707] 
Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:24:32.712 [414/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:24:32.712 [415/707] Linking static target lib/librte_ipsec.a 00:24:32.970 [416/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:24:32.970 [417/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:24:33.228 [418/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:24:33.228 [419/707] Linking target lib/librte_ipsec.so.24.0 00:24:33.228 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:24:33.228 [421/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:24:33.487 [422/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:24:33.487 [423/707] Linking static target lib/librte_fib.a 00:24:33.487 [424/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:24:33.487 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:24:33.745 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:24:33.745 [427/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:24:33.745 [428/707] Linking static target lib/librte_pdcp.a 00:24:34.004 [429/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:24:34.004 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:24:34.004 [431/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:24:34.004 [432/707] Linking target lib/librte_fib.so.24.0 00:24:34.262 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:24:34.262 [434/707] Linking target lib/librte_pdcp.so.24.0 00:24:34.521 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:24:34.521 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:24:34.521 [437/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:24:34.778 [438/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:24:34.778 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:24:34.778 [440/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:24:35.037 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:24:35.037 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:24:35.297 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:24:35.297 [444/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:24:35.297 [445/707] Linking static target lib/librte_port.a 00:24:35.297 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:24:35.297 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:24:35.560 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:24:35.560 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:24:35.560 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:24:35.560 [451/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:24:35.560 [452/707] Linking static target lib/librte_pdump.a 00:24:35.830 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:24:35.830 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:24:35.830 [455/707] Linking target lib/librte_port.so.24.0 00:24:36.088 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:24:36.088 [457/707] Linking target lib/librte_pdump.so.24.0 00:24:36.088 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:24:36.088 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:24:36.088 [460/707] 
Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:24:36.088 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:24:36.346 [462/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:24:36.346 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:24:36.346 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:24:36.605 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:24:36.605 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:24:36.605 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:24:36.605 [468/707] Linking static target lib/librte_table.a 00:24:36.605 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:24:36.863 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:24:37.122 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:24:37.122 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:24:37.122 [473/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:24:37.381 [474/707] Linking target lib/librte_table.so.24.0 00:24:37.381 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:24:37.381 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:24:37.381 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:24:37.639 [478/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:24:37.898 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:24:37.898 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:24:37.898 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:24:37.898 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:24:38.157 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:24:38.416 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:24:38.416 [485/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:24:38.416 [486/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:24:38.416 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:24:38.674 [488/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:24:38.674 [489/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:24:38.674 [490/707] Linking static target lib/librte_graph.a 00:24:38.936 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:24:39.520 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:24:39.520 [493/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:24:39.520 [494/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:24:39.520 [495/707] Linking target lib/librte_graph.so.24.0 00:24:39.520 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:24:39.520 [497/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:24:39.521 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:24:39.521 [499/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:24:39.521 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:24:39.784 [501/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:24:39.784 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:24:39.784 [503/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:24:39.784 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:24:40.042 [505/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:24:40.042 [506/707] Compiling 
C object lib/librte_node.a.p/node_pkt_cls.c.o 00:24:40.301 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:24:40.301 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:24:40.301 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:24:40.301 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:24:40.301 [511/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:24:40.301 [512/707] Linking static target lib/librte_node.a 00:24:40.558 [513/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:24:40.558 [514/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:24:40.558 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:24:40.558 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:24:40.816 [517/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:24:40.816 [518/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:24:40.816 [519/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:40.816 [520/707] Linking static target drivers/librte_bus_pci.a 00:24:40.816 [521/707] Linking target lib/librte_node.so.24.0 00:24:40.816 [522/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:40.816 [523/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:24:41.075 [524/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:24:41.075 [525/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:41.075 [526/707] Linking static target drivers/librte_bus_vdev.a 00:24:41.075 [527/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:41.075 [528/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:24:41.333 [529/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:24:41.333 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:41.333 [531/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:24:41.333 [532/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:24:41.333 [533/707] Linking target drivers/librte_bus_vdev.so.24.0 00:24:41.333 [534/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:41.333 [535/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:24:41.333 [536/707] Linking target drivers/librte_bus_pci.so.24.0 00:24:41.591 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:24:41.591 [538/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:24:41.591 [539/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:41.591 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:41.591 [541/707] Linking static target drivers/librte_mempool_ring.a 00:24:41.591 [542/707] Linking target drivers/librte_mempool_ring.so.24.0 00:24:41.591 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:24:41.873 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:24:42.448 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:24:42.448 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:24:42.448 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:24:42.707 [548/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:24:42.707 
[549/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:24:43.275 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:24:43.275 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:24:43.275 [552/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:24:43.275 [553/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:24:43.534 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:24:43.534 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:24:43.792 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:24:43.792 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:24:43.792 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:24:44.051 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:24:44.309 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:24:44.309 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:24:44.568 [562/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:24:44.568 [563/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:24:44.826 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:24:44.826 [565/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:24:44.826 [566/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:24:45.085 [567/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:24:45.085 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:24:45.085 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:24:45.085 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:24:45.343 
[571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:24:45.343 [572/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:24:45.343 [573/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:24:45.601 [574/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:24:45.601 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:24:45.859 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:24:45.859 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:24:45.859 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:24:45.859 [579/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:24:45.859 [580/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:24:46.118 [581/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:24:46.118 [582/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:24:46.377 [583/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:24:46.377 [584/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:24:46.377 [585/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:24:46.377 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:24:46.377 [587/707] Linking static target drivers/librte_net_i40e.a 00:24:46.377 [588/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:24:46.377 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:24:46.635 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:24:46.894 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:24:47.153 [592/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:24:47.153 [593/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:24:47.153 [594/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:24:47.153 [595/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:24:47.153 [596/707] Linking target drivers/librte_net_i40e.so.24.0 00:24:47.153 [597/707] Linking static target lib/librte_vhost.a 00:24:47.411 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:24:47.411 [599/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:24:47.411 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:24:47.670 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:24:47.670 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:24:47.928 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:24:47.928 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:24:48.187 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:24:48.187 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:24:48.187 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:24:48.445 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:24:48.445 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:24:48.445 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:24:48.445 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:24:48.704 [612/707] Compiling C object 
app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:24:48.704 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:24:48.704 [614/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:24:48.704 [615/707] Linking target lib/librte_vhost.so.24.0 00:24:48.962 [616/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:24:48.962 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:24:49.220 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:24:49.220 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:24:49.478 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:24:50.043 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:24:50.043 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:24:50.043 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:24:50.043 [624/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:24:50.302 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:24:50.302 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:24:50.302 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:24:50.302 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:24:50.559 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:24:50.559 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:24:50.559 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:24:50.559 [632/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:24:50.559 [633/707] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:24:50.818 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:24:50.818 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:24:51.076 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:24:51.076 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:24:51.076 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:24:51.076 [639/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:24:51.334 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:24:51.335 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:24:51.335 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:24:51.593 [643/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:24:51.593 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:24:51.593 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:24:51.593 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:24:51.593 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:24:51.853 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:24:51.853 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:24:51.853 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:24:52.113 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:24:52.113 [652/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:24:52.113 [653/707] Linking static target lib/librte_pipeline.a 00:24:52.113 [654/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:24:52.371 [655/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:24:52.371 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:24:52.371 [657/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:24:52.371 [658/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:24:52.630 [659/707] Linking target app/dpdk-dumpcap 00:24:52.631 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:24:52.890 [661/707] Linking target app/dpdk-graph 00:24:52.890 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:24:52.890 [663/707] Linking target app/dpdk-pdump 00:24:52.890 [664/707] Linking target app/dpdk-proc-info 00:24:52.890 [665/707] Linking target app/dpdk-test-acl 00:24:53.149 [666/707] Linking target app/dpdk-test-cmdline 00:24:53.149 [667/707] Linking target app/dpdk-test-bbdev 00:24:53.409 [668/707] Linking target app/dpdk-test-crypto-perf 00:24:53.409 [669/707] Linking target app/dpdk-test-dma-perf 00:24:53.409 [670/707] Linking target app/dpdk-test-compress-perf 00:24:53.409 [671/707] Linking target app/dpdk-test-fib 00:24:53.409 [672/707] Linking target app/dpdk-test-eventdev 00:24:53.668 [673/707] Linking target app/dpdk-test-flow-perf 00:24:53.668 [674/707] Linking target app/dpdk-test-gpudev 00:24:53.668 [675/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:24:53.668 [676/707] Linking target app/dpdk-test-mldev 00:24:53.926 [677/707] Linking target app/dpdk-test-pipeline 00:24:53.926 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:24:54.184 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:24:54.184 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:24:54.184 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:24:54.443 [682/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:24:54.443 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:24:54.443 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:24:54.707 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:24:54.978 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:24:54.978 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:24:54.978 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:24:54.978 [689/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.237 [690/707] Linking target lib/librte_pipeline.so.24.0 00:24:55.237 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:24:55.237 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:24:55.496 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:24:55.496 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:24:55.754 [695/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:24:55.755 [696/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:24:56.014 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:24:56.014 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:24:56.014 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:24:56.273 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:24:56.273 [701/707] Linking target app/dpdk-test-sad 00:24:56.273 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:24:56.273 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:24:56.531 [704/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:24:56.531 [705/707] Linking target app/dpdk-test-regex 
00:24:56.791 [706/707] Linking target app/dpdk-testpmd 00:24:56.791 [707/707] Linking target app/dpdk-test-security-perf 00:24:56.791 13:54:03 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:24:56.791 13:54:03 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:24:56.791 13:54:03 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:24:57.050 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:24:57.050 [0/1] Installing files. 00:24:57.312 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:24:57.312 Installing 
/home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:24:57.312 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:24:57.313 
Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.313 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:24:57.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:24:57.314 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.314 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.315 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.316 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:24:57.317 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:24:57.317 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.317 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.576 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.577 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:24:57.836 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:24:57.836 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:24:57.836 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:24:57.836 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0
00:24:57.836 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:24:57.836 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:24:57.836 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:24:57.836 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:24:57.836 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.836 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:57.837 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.099 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.100 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 
Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:24:58.101 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:24:58.101 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:24:58.101 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:24:58.101 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:24:58.101 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:24:58.101 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:24:58.101 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:24:58.101 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:24:58.101 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:24:58.101 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:24:58.101 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:24:58.101 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:24:58.101 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:24:58.101 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:24:58.101 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:24:58.101 Installing symlink pointing to librte_mbuf.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:24:58.101 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:24:58.101 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:24:58.101 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:24:58.101 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:24:58.101 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:24:58.101 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:24:58.101 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:24:58.101 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:24:58.101 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:24:58.101 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:24:58.101 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:24:58.101 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:24:58.101 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:24:58.101 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:24:58.101 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:24:58.101 Installing symlink pointing to librte_timer.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:24:58.101 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:24:58.101 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:24:58.101 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:24:58.101 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:24:58.101 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:24:58.101 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:24:58.101 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:24:58.101 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:24:58.101 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:24:58.101 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:24:58.101 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:24:58.101 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:24:58.101 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:24:58.101 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:24:58.101 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:24:58.101 Installing symlink 
pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:24:58.101 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:24:58.101 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:24:58.101 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:24:58.101 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:24:58.102 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:24:58.102 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:24:58.102 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:24:58.102 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:24:58.102 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:24:58.102 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:24:58.102 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:24:58.102 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:24:58.102 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:24:58.102 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:24:58.102 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:24:58.102 
Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:24:58.102 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:24:58.102 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:24:58.102 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:24:58.102 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:24:58.102 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:24:58.102 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:24:58.102 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:24:58.102 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:24:58.102 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:24:58.102 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:24:58.102 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:24:58.102 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:24:58.102 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:24:58.102 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:24:58.102 Installing symlink pointing to librte_rawdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:24:58.102 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:24:58.102 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:24:58.102 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:24:58.102 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:24:58.102 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:24:58.102 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:24:58.102 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:24:58.102 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:24:58.102 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:24:58.102 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:24:58.102 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:24:58.102 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:24:58.102 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:24:58.102 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:24:58.102 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:24:58.102 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:24:58.102 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:24:58.102 './librte_bus_vdev.so.24.0' -> 
'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:24:58.102 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:24:58.102 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:24:58.102 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:24:58.102 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:24:58.102 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:24:58.102 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:24:58.102 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:24:58.102 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:24:58.102 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:24:58.102 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:24:58.102 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:24:58.102 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:24:58.102 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:24:58.102 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:24:58.102 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:24:58.102 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:24:58.102 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:24:58.102 Installing symlink pointing to librte_port.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:24:58.102 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:24:58.102 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:24:58.102 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:24:58.102 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:24:58.102 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:24:58.102 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:24:58.102 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:24:58.102 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:24:58.102 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:24:58.102 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:24:58.102 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:24:58.102 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:24:58.102 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:24:58.102 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:24:58.102 Installing symlink pointing to librte_mempool_ring.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:24:58.102 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:24:58.102 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:24:58.102 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:24:58.102 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:24:58.102 13:54:04 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:24:58.102 13:54:04 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:58.102 00:24:58.102 real 0m55.008s 00:24:58.102 user 6m14.543s 00:24:58.102 sys 1m17.485s 00:24:58.102 13:54:04 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:24:58.102 13:54:04 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:24:58.102 ************************************ 00:24:58.102 END TEST build_native_dpdk 00:24:58.102 ************************************ 00:24:58.102 13:54:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:24:58.102 13:54:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:24:58.102 13:54:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:24:58.361 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:24:58.361 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:24:58.361 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:24:58.361 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:58.927 Using 'verbs' RDMA provider 00:25:15.299 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:25:30.232 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:25:30.232 Creating mk/config.mk...done. 00:25:30.232 Creating mk/cc.flags.mk...done. 00:25:30.232 Type 'make' to build. 00:25:30.232 13:54:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:25:30.232 13:54:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:25:30.232 13:54:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:25:30.232 13:54:35 -- common/autotest_common.sh@10 -- $ set +x 00:25:30.232 ************************************ 00:25:30.232 START TEST make 00:25:30.232 ************************************ 00:25:30.232 13:54:35 make -- common/autotest_common.sh@1125 -- $ make -j10 00:25:30.232 make[1]: Nothing to be done for 'all'. 
00:26:16.913 CC lib/ut/ut.o 00:26:16.913 CC lib/log/log.o 00:26:16.914 CC lib/log/log_flags.o 00:26:16.914 CC lib/log/log_deprecated.o 00:26:16.914 CC lib/ut_mock/mock.o 00:26:17.176 LIB libspdk_ut_mock.a 00:26:17.176 LIB libspdk_log.a 00:26:17.176 SO libspdk_ut_mock.so.6.0 00:26:17.176 LIB libspdk_ut.a 00:26:17.176 SO libspdk_log.so.7.0 00:26:17.176 SO libspdk_ut.so.2.0 00:26:17.176 SYMLINK libspdk_ut_mock.so 00:26:17.176 SYMLINK libspdk_log.so 00:26:17.176 SYMLINK libspdk_ut.so 00:26:17.434 CC lib/util/base64.o 00:26:17.434 CXX lib/trace_parser/trace.o 00:26:17.434 CC lib/util/cpuset.o 00:26:17.434 CC lib/util/crc16.o 00:26:17.434 CC lib/util/crc32.o 00:26:17.434 CC lib/util/bit_array.o 00:26:17.434 CC lib/util/crc32c.o 00:26:17.434 CC lib/dma/dma.o 00:26:17.434 CC lib/ioat/ioat.o 00:26:17.692 CC lib/vfio_user/host/vfio_user_pci.o 00:26:17.692 CC lib/vfio_user/host/vfio_user.o 00:26:17.692 CC lib/util/crc32_ieee.o 00:26:17.692 CC lib/util/crc64.o 00:26:17.692 CC lib/util/dif.o 00:26:17.950 LIB libspdk_dma.a 00:26:17.950 CC lib/util/fd.o 00:26:17.950 CC lib/util/fd_group.o 00:26:17.950 SO libspdk_dma.so.5.0 00:26:17.950 CC lib/util/file.o 00:26:17.950 CC lib/util/hexlify.o 00:26:17.950 LIB libspdk_ioat.a 00:26:17.950 SYMLINK libspdk_dma.so 00:26:17.950 CC lib/util/iov.o 00:26:17.950 CC lib/util/math.o 00:26:17.950 SO libspdk_ioat.so.7.0 00:26:17.950 CC lib/util/net.o 00:26:17.950 LIB libspdk_vfio_user.a 00:26:17.950 SYMLINK libspdk_ioat.so 00:26:17.950 CC lib/util/pipe.o 00:26:17.950 SO libspdk_vfio_user.so.5.0 00:26:18.208 CC lib/util/strerror_tls.o 00:26:18.208 CC lib/util/string.o 00:26:18.208 SYMLINK libspdk_vfio_user.so 00:26:18.208 CC lib/util/uuid.o 00:26:18.208 CC lib/util/xor.o 00:26:18.208 CC lib/util/zipf.o 00:26:18.208 CC lib/util/md5.o 00:26:18.466 LIB libspdk_util.a 00:26:18.466 SO libspdk_util.so.10.0 00:26:18.724 LIB libspdk_trace_parser.a 00:26:18.724 SO libspdk_trace_parser.so.6.0 00:26:18.724 SYMLINK libspdk_util.so 00:26:18.981 SYMLINK 
libspdk_trace_parser.so 00:26:18.982 CC lib/rdma_provider/common.o 00:26:18.982 CC lib/rdma_provider/rdma_provider_verbs.o 00:26:18.982 CC lib/conf/conf.o 00:26:18.982 CC lib/env_dpdk/env.o 00:26:18.982 CC lib/json/json_parse.o 00:26:18.982 CC lib/json/json_util.o 00:26:18.982 CC lib/json/json_write.o 00:26:18.982 CC lib/rdma_utils/rdma_utils.o 00:26:18.982 CC lib/vmd/vmd.o 00:26:18.982 CC lib/idxd/idxd.o 00:26:19.239 CC lib/idxd/idxd_user.o 00:26:19.239 LIB libspdk_rdma_provider.a 00:26:19.239 LIB libspdk_conf.a 00:26:19.239 SO libspdk_rdma_provider.so.6.0 00:26:19.239 SO libspdk_conf.so.6.0 00:26:19.239 CC lib/idxd/idxd_kernel.o 00:26:19.239 LIB libspdk_rdma_utils.a 00:26:19.239 SYMLINK libspdk_rdma_provider.so 00:26:19.239 CC lib/vmd/led.o 00:26:19.239 LIB libspdk_json.a 00:26:19.239 CC lib/env_dpdk/memory.o 00:26:19.239 SYMLINK libspdk_conf.so 00:26:19.239 CC lib/env_dpdk/pci.o 00:26:19.239 SO libspdk_rdma_utils.so.1.0 00:26:19.239 SO libspdk_json.so.6.0 00:26:19.496 SYMLINK libspdk_rdma_utils.so 00:26:19.496 CC lib/env_dpdk/init.o 00:26:19.496 SYMLINK libspdk_json.so 00:26:19.496 CC lib/env_dpdk/threads.o 00:26:19.496 CC lib/env_dpdk/pci_ioat.o 00:26:19.496 CC lib/env_dpdk/pci_virtio.o 00:26:19.496 CC lib/env_dpdk/pci_vmd.o 00:26:19.496 CC lib/env_dpdk/pci_idxd.o 00:26:19.496 CC lib/env_dpdk/pci_event.o 00:26:19.496 CC lib/jsonrpc/jsonrpc_server.o 00:26:19.756 CC lib/env_dpdk/sigbus_handler.o 00:26:19.756 CC lib/env_dpdk/pci_dpdk.o 00:26:19.756 CC lib/env_dpdk/pci_dpdk_2207.o 00:26:19.756 CC lib/env_dpdk/pci_dpdk_2211.o 00:26:19.756 LIB libspdk_vmd.a 00:26:19.756 LIB libspdk_idxd.a 00:26:19.756 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:26:19.756 SO libspdk_vmd.so.6.0 00:26:19.756 CC lib/jsonrpc/jsonrpc_client.o 00:26:19.756 SO libspdk_idxd.so.12.1 00:26:20.016 SYMLINK libspdk_vmd.so 00:26:20.016 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:26:20.016 SYMLINK libspdk_idxd.so 00:26:20.274 LIB libspdk_jsonrpc.a 00:26:20.274 SO libspdk_jsonrpc.so.6.0 00:26:20.274 SYMLINK 
libspdk_jsonrpc.so 00:26:20.531 CC lib/rpc/rpc.o 00:26:20.788 LIB libspdk_env_dpdk.a 00:26:20.788 SO libspdk_env_dpdk.so.15.0 00:26:20.788 LIB libspdk_rpc.a 00:26:20.788 SO libspdk_rpc.so.6.0 00:26:21.045 SYMLINK libspdk_rpc.so 00:26:21.045 SYMLINK libspdk_env_dpdk.so 00:26:21.045 CC lib/keyring/keyring_rpc.o 00:26:21.045 CC lib/keyring/keyring.o 00:26:21.045 CC lib/trace/trace_flags.o 00:26:21.045 CC lib/trace/trace.o 00:26:21.045 CC lib/trace/trace_rpc.o 00:26:21.045 CC lib/notify/notify_rpc.o 00:26:21.045 CC lib/notify/notify.o 00:26:21.304 LIB libspdk_notify.a 00:26:21.304 SO libspdk_notify.so.6.0 00:26:21.304 LIB libspdk_trace.a 00:26:21.563 SYMLINK libspdk_notify.so 00:26:21.563 SO libspdk_trace.so.11.0 00:26:21.563 LIB libspdk_keyring.a 00:26:21.563 SO libspdk_keyring.so.2.0 00:26:21.563 SYMLINK libspdk_trace.so 00:26:21.563 SYMLINK libspdk_keyring.so 00:26:21.822 CC lib/thread/thread.o 00:26:21.822 CC lib/thread/iobuf.o 00:26:21.822 CC lib/sock/sock_rpc.o 00:26:21.822 CC lib/sock/sock.o 00:26:22.388 LIB libspdk_sock.a 00:26:22.388 SO libspdk_sock.so.10.0 00:26:22.646 SYMLINK libspdk_sock.so 00:26:22.904 CC lib/nvme/nvme_ctrlr_cmd.o 00:26:22.905 CC lib/nvme/nvme_ns.o 00:26:22.905 CC lib/nvme/nvme_ctrlr.o 00:26:22.905 CC lib/nvme/nvme_fabric.o 00:26:22.905 CC lib/nvme/nvme_qpair.o 00:26:22.905 CC lib/nvme/nvme_pcie.o 00:26:22.905 CC lib/nvme/nvme_pcie_common.o 00:26:22.905 CC lib/nvme/nvme_ns_cmd.o 00:26:22.905 CC lib/nvme/nvme.o 00:26:23.838 CC lib/nvme/nvme_quirks.o 00:26:23.838 LIB libspdk_thread.a 00:26:23.838 CC lib/nvme/nvme_transport.o 00:26:23.838 CC lib/nvme/nvme_discovery.o 00:26:23.838 SO libspdk_thread.so.10.1 00:26:23.838 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:26:23.838 SYMLINK libspdk_thread.so 00:26:23.838 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:24.096 CC lib/nvme/nvme_tcp.o 00:26:24.096 CC lib/accel/accel.o 00:26:24.096 CC lib/blob/blobstore.o 00:26:24.354 CC lib/blob/request.o 00:26:24.354 CC lib/blob/zeroes.o 00:26:24.612 CC 
lib/blob/blob_bs_dev.o 00:26:24.612 CC lib/nvme/nvme_opal.o 00:26:24.612 CC lib/accel/accel_rpc.o 00:26:24.612 CC lib/init/json_config.o 00:26:24.870 CC lib/virtio/virtio.o 00:26:24.870 CC lib/accel/accel_sw.o 00:26:24.870 CC lib/fsdev/fsdev.o 00:26:24.870 CC lib/nvme/nvme_io_msg.o 00:26:24.870 CC lib/init/subsystem.o 00:26:25.129 CC lib/virtio/virtio_vhost_user.o 00:26:25.129 CC lib/virtio/virtio_vfio_user.o 00:26:25.129 CC lib/init/subsystem_rpc.o 00:26:25.387 CC lib/virtio/virtio_pci.o 00:26:25.387 CC lib/init/rpc.o 00:26:25.387 LIB libspdk_accel.a 00:26:25.387 CC lib/fsdev/fsdev_io.o 00:26:25.645 SO libspdk_accel.so.16.0 00:26:25.645 CC lib/nvme/nvme_poll_group.o 00:26:25.645 LIB libspdk_init.a 00:26:25.645 CC lib/nvme/nvme_zns.o 00:26:25.645 LIB libspdk_virtio.a 00:26:25.645 SYMLINK libspdk_accel.so 00:26:25.645 CC lib/nvme/nvme_stubs.o 00:26:25.645 SO libspdk_init.so.6.0 00:26:25.645 SO libspdk_virtio.so.7.0 00:26:25.645 SYMLINK libspdk_init.so 00:26:25.645 CC lib/fsdev/fsdev_rpc.o 00:26:25.902 SYMLINK libspdk_virtio.so 00:26:25.902 CC lib/nvme/nvme_auth.o 00:26:25.902 CC lib/nvme/nvme_cuse.o 00:26:25.902 CC lib/bdev/bdev.o 00:26:25.902 CC lib/bdev/bdev_rpc.o 00:26:25.903 LIB libspdk_fsdev.a 00:26:25.903 CC lib/event/app.o 00:26:25.903 SO libspdk_fsdev.so.1.0 00:26:26.161 SYMLINK libspdk_fsdev.so 00:26:26.161 CC lib/bdev/bdev_zone.o 00:26:26.161 CC lib/bdev/part.o 00:26:26.161 CC lib/bdev/scsi_nvme.o 00:26:26.419 CC lib/nvme/nvme_rdma.o 00:26:26.419 CC lib/event/reactor.o 00:26:26.419 CC lib/event/log_rpc.o 00:26:26.419 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:26:26.677 CC lib/event/app_rpc.o 00:26:26.677 CC lib/event/scheduler_static.o 00:26:26.935 LIB libspdk_event.a 00:26:26.935 SO libspdk_event.so.14.0 00:26:27.193 SYMLINK libspdk_event.so 00:26:27.193 LIB libspdk_fuse_dispatcher.a 00:26:27.452 SO libspdk_fuse_dispatcher.so.1.0 00:26:27.452 SYMLINK libspdk_fuse_dispatcher.so 00:26:28.020 LIB libspdk_nvme.a 00:26:28.020 SO libspdk_nvme.so.14.0 
00:26:28.587 SYMLINK libspdk_nvme.so 00:26:28.587 LIB libspdk_blob.a 00:26:28.587 SO libspdk_blob.so.11.0 00:26:28.587 SYMLINK libspdk_blob.so 00:26:28.847 CC lib/blobfs/blobfs.o 00:26:28.847 CC lib/blobfs/tree.o 00:26:28.847 CC lib/lvol/lvol.o 00:26:29.106 LIB libspdk_bdev.a 00:26:29.365 SO libspdk_bdev.so.16.0 00:26:29.365 SYMLINK libspdk_bdev.so 00:26:29.623 CC lib/scsi/dev.o 00:26:29.623 CC lib/scsi/lun.o 00:26:29.623 CC lib/scsi/port.o 00:26:29.623 CC lib/scsi/scsi.o 00:26:29.623 CC lib/nvmf/ctrlr.o 00:26:29.623 CC lib/ftl/ftl_core.o 00:26:29.623 CC lib/nbd/nbd.o 00:26:29.623 CC lib/ublk/ublk.o 00:26:29.880 CC lib/nbd/nbd_rpc.o 00:26:29.880 CC lib/scsi/scsi_bdev.o 00:26:29.880 CC lib/scsi/scsi_pr.o 00:26:29.880 CC lib/nvmf/ctrlr_discovery.o 00:26:29.880 LIB libspdk_blobfs.a 00:26:30.138 SO libspdk_blobfs.so.10.0 00:26:30.138 CC lib/nvmf/ctrlr_bdev.o 00:26:30.138 CC lib/ftl/ftl_init.o 00:26:30.138 LIB libspdk_lvol.a 00:26:30.138 SO libspdk_lvol.so.10.0 00:26:30.138 SYMLINK libspdk_blobfs.so 00:26:30.138 CC lib/ftl/ftl_layout.o 00:26:30.138 LIB libspdk_nbd.a 00:26:30.138 SYMLINK libspdk_lvol.so 00:26:30.138 CC lib/ftl/ftl_debug.o 00:26:30.138 SO libspdk_nbd.so.7.0 00:26:30.397 CC lib/scsi/scsi_rpc.o 00:26:30.397 SYMLINK libspdk_nbd.so 00:26:30.397 CC lib/scsi/task.o 00:26:30.397 CC lib/nvmf/subsystem.o 00:26:30.397 CC lib/nvmf/nvmf.o 00:26:30.397 CC lib/nvmf/nvmf_rpc.o 00:26:30.656 CC lib/ublk/ublk_rpc.o 00:26:30.656 CC lib/nvmf/transport.o 00:26:30.656 CC lib/ftl/ftl_io.o 00:26:30.656 LIB libspdk_scsi.a 00:26:30.656 CC lib/nvmf/tcp.o 00:26:30.656 SO libspdk_scsi.so.9.0 00:26:30.656 LIB libspdk_ublk.a 00:26:30.656 SYMLINK libspdk_scsi.so 00:26:30.915 CC lib/nvmf/stubs.o 00:26:30.915 SO libspdk_ublk.so.3.0 00:26:30.915 SYMLINK libspdk_ublk.so 00:26:30.915 CC lib/nvmf/mdns_server.o 00:26:30.915 CC lib/ftl/ftl_sb.o 00:26:31.174 CC lib/iscsi/conn.o 00:26:31.174 CC lib/ftl/ftl_l2p.o 00:26:31.433 CC lib/ftl/ftl_l2p_flat.o 00:26:31.433 CC lib/nvmf/rdma.o 00:26:31.433 
CC lib/iscsi/init_grp.o 00:26:31.692 CC lib/ftl/ftl_nv_cache.o 00:26:31.692 CC lib/nvmf/auth.o 00:26:31.692 CC lib/vhost/vhost.o 00:26:31.692 CC lib/vhost/vhost_rpc.o 00:26:31.692 CC lib/iscsi/iscsi.o 00:26:31.951 CC lib/iscsi/param.o 00:26:31.951 CC lib/ftl/ftl_band.o 00:26:32.210 CC lib/iscsi/portal_grp.o 00:26:32.210 CC lib/ftl/ftl_band_ops.o 00:26:32.210 CC lib/vhost/vhost_scsi.o 00:26:32.468 CC lib/ftl/ftl_writer.o 00:26:32.468 CC lib/ftl/ftl_rq.o 00:26:32.468 CC lib/ftl/ftl_reloc.o 00:26:32.727 CC lib/iscsi/tgt_node.o 00:26:32.727 CC lib/iscsi/iscsi_subsystem.o 00:26:32.727 CC lib/vhost/vhost_blk.o 00:26:32.727 CC lib/ftl/ftl_l2p_cache.o 00:26:32.727 CC lib/ftl/ftl_p2l.o 00:26:33.044 CC lib/vhost/rte_vhost_user.o 00:26:33.044 CC lib/ftl/ftl_p2l_log.o 00:26:33.303 CC lib/ftl/mngt/ftl_mngt.o 00:26:33.303 CC lib/iscsi/iscsi_rpc.o 00:26:33.303 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:33.303 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:33.303 CC lib/iscsi/task.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:33.560 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:33.818 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:33.818 LIB libspdk_iscsi.a 00:26:33.818 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:26:33.818 SO libspdk_iscsi.so.8.0 00:26:33.818 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:33.818 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:33.818 CC lib/ftl/utils/ftl_conf.o 00:26:34.076 CC lib/ftl/utils/ftl_md.o 00:26:34.076 CC lib/ftl/utils/ftl_mempool.o 00:26:34.076 CC lib/ftl/utils/ftl_bitmap.o 00:26:34.076 SYMLINK libspdk_iscsi.so 00:26:34.076 CC lib/ftl/utils/ftl_property.o 00:26:34.076 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:34.076 LIB libspdk_vhost.a 00:26:34.076 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:34.076 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:34.076 SO libspdk_vhost.so.8.0 00:26:34.335 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:34.335 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:34.335 SYMLINK libspdk_vhost.so 00:26:34.335 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:34.335 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:34.335 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:34.335 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:34.335 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:34.335 LIB libspdk_nvmf.a 00:26:34.335 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:34.595 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:26:34.595 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:26:34.595 CC lib/ftl/base/ftl_base_dev.o 00:26:34.595 CC lib/ftl/base/ftl_base_bdev.o 00:26:34.595 SO libspdk_nvmf.so.19.0 00:26:34.595 CC lib/ftl/ftl_trace.o 00:26:34.853 SYMLINK libspdk_nvmf.so 00:26:34.853 LIB libspdk_ftl.a 00:26:35.111 SO libspdk_ftl.so.9.0 00:26:35.678 SYMLINK libspdk_ftl.so 00:26:35.937 CC module/env_dpdk/env_dpdk_rpc.o 00:26:35.937 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:26:35.937 CC module/keyring/linux/keyring.o 00:26:35.937 CC module/scheduler/gscheduler/gscheduler.o 00:26:35.937 CC module/blob/bdev/blob_bdev.o 00:26:35.937 CC module/accel/error/accel_error.o 00:26:35.937 CC module/fsdev/aio/fsdev_aio.o 00:26:35.937 CC module/scheduler/dynamic/scheduler_dynamic.o 00:26:35.937 CC module/sock/posix/posix.o 00:26:35.937 CC module/keyring/file/keyring.o 00:26:35.937 LIB libspdk_env_dpdk_rpc.a 00:26:35.937 SO libspdk_env_dpdk_rpc.so.6.0 00:26:36.196 SYMLINK libspdk_env_dpdk_rpc.so 00:26:36.196 LIB libspdk_scheduler_dpdk_governor.a 00:26:36.196 CC module/accel/error/accel_error_rpc.o 00:26:36.196 LIB libspdk_scheduler_gscheduler.a 00:26:36.196 CC module/keyring/linux/keyring_rpc.o 00:26:36.196 CC module/keyring/file/keyring_rpc.o 00:26:36.196 SO libspdk_scheduler_gscheduler.so.4.0 00:26:36.196 SO libspdk_scheduler_dpdk_governor.so.4.0 00:26:36.196 CC module/fsdev/aio/fsdev_aio_rpc.o 00:26:36.196 LIB libspdk_scheduler_dynamic.a 00:26:36.196 SYMLINK libspdk_scheduler_gscheduler.so 00:26:36.196 CC 
module/fsdev/aio/linux_aio_mgr.o 00:26:36.196 LIB libspdk_accel_error.a 00:26:36.196 SO libspdk_scheduler_dynamic.so.4.0 00:26:36.455 LIB libspdk_keyring_file.a 00:26:36.455 SYMLINK libspdk_scheduler_dpdk_governor.so 00:26:36.455 LIB libspdk_keyring_linux.a 00:26:36.455 LIB libspdk_blob_bdev.a 00:26:36.455 SO libspdk_accel_error.so.2.0 00:26:36.455 SO libspdk_keyring_file.so.2.0 00:26:36.455 SO libspdk_blob_bdev.so.11.0 00:26:36.455 SO libspdk_keyring_linux.so.1.0 00:26:36.455 SYMLINK libspdk_scheduler_dynamic.so 00:26:36.455 SYMLINK libspdk_accel_error.so 00:26:36.455 SYMLINK libspdk_keyring_file.so 00:26:36.455 SYMLINK libspdk_blob_bdev.so 00:26:36.455 SYMLINK libspdk_keyring_linux.so 00:26:36.455 CC module/accel/ioat/accel_ioat.o 00:26:36.455 CC module/accel/ioat/accel_ioat_rpc.o 00:26:36.713 CC module/accel/dsa/accel_dsa.o 00:26:36.714 CC module/accel/iaa/accel_iaa.o 00:26:36.714 CC module/accel/iaa/accel_iaa_rpc.o 00:26:36.714 CC module/blobfs/bdev/blobfs_bdev.o 00:26:36.714 CC module/bdev/delay/vbdev_delay.o 00:26:36.714 CC module/bdev/error/vbdev_error.o 00:26:36.714 CC module/bdev/gpt/gpt.o 00:26:36.714 LIB libspdk_accel_ioat.a 00:26:36.714 LIB libspdk_fsdev_aio.a 00:26:36.714 SO libspdk_accel_ioat.so.6.0 00:26:36.972 SO libspdk_fsdev_aio.so.1.0 00:26:36.972 CC module/bdev/error/vbdev_error_rpc.o 00:26:36.972 LIB libspdk_accel_iaa.a 00:26:36.972 SYMLINK libspdk_accel_ioat.so 00:26:36.972 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:36.972 SO libspdk_accel_iaa.so.3.0 00:26:36.972 SYMLINK libspdk_fsdev_aio.so 00:26:36.972 LIB libspdk_sock_posix.a 00:26:36.972 CC module/accel/dsa/accel_dsa_rpc.o 00:26:36.972 CC module/bdev/delay/vbdev_delay_rpc.o 00:26:36.972 SO libspdk_sock_posix.so.6.0 00:26:36.972 SYMLINK libspdk_accel_iaa.so 00:26:36.972 CC module/bdev/gpt/vbdev_gpt.o 00:26:37.230 SYMLINK libspdk_sock_posix.so 00:26:37.230 LIB libspdk_bdev_error.a 00:26:37.230 LIB libspdk_blobfs_bdev.a 00:26:37.230 LIB libspdk_accel_dsa.a 00:26:37.230 SO 
libspdk_bdev_error.so.6.0 00:26:37.230 SO libspdk_blobfs_bdev.so.6.0 00:26:37.230 CC module/bdev/lvol/vbdev_lvol.o 00:26:37.230 SO libspdk_accel_dsa.so.5.0 00:26:37.230 LIB libspdk_bdev_delay.a 00:26:37.230 SYMLINK libspdk_blobfs_bdev.so 00:26:37.230 SYMLINK libspdk_bdev_error.so 00:26:37.230 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:37.230 CC module/bdev/malloc/bdev_malloc.o 00:26:37.230 SO libspdk_bdev_delay.so.6.0 00:26:37.230 SYMLINK libspdk_accel_dsa.so 00:26:37.230 CC module/bdev/nvme/bdev_nvme.o 00:26:37.488 CC module/bdev/null/bdev_null.o 00:26:37.488 SYMLINK libspdk_bdev_delay.so 00:26:37.488 CC module/bdev/null/bdev_null_rpc.o 00:26:37.488 CC module/bdev/passthru/vbdev_passthru.o 00:26:37.488 LIB libspdk_bdev_gpt.a 00:26:37.488 CC module/bdev/raid/bdev_raid.o 00:26:37.488 SO libspdk_bdev_gpt.so.6.0 00:26:37.488 CC module/bdev/split/vbdev_split.o 00:26:37.488 SYMLINK libspdk_bdev_gpt.so 00:26:37.488 CC module/bdev/split/vbdev_split_rpc.o 00:26:37.787 LIB libspdk_bdev_null.a 00:26:37.787 SO libspdk_bdev_null.so.6.0 00:26:37.787 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:37.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:37.787 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:37.787 SYMLINK libspdk_bdev_null.so 00:26:37.787 CC module/bdev/raid/bdev_raid_rpc.o 00:26:37.787 LIB libspdk_bdev_split.a 00:26:37.787 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:37.787 SO libspdk_bdev_split.so.6.0 00:26:38.046 LIB libspdk_bdev_lvol.a 00:26:38.046 LIB libspdk_bdev_malloc.a 00:26:38.046 SYMLINK libspdk_bdev_split.so 00:26:38.046 SO libspdk_bdev_lvol.so.6.0 00:26:38.046 CC module/bdev/aio/bdev_aio.o 00:26:38.046 SO libspdk_bdev_malloc.so.6.0 00:26:38.046 SYMLINK libspdk_bdev_lvol.so 00:26:38.046 CC module/bdev/raid/bdev_raid_sb.o 00:26:38.046 LIB libspdk_bdev_passthru.a 00:26:38.046 SYMLINK libspdk_bdev_malloc.so 00:26:38.046 CC module/bdev/raid/raid0.o 00:26:38.046 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:38.046 SO libspdk_bdev_passthru.so.6.0 
00:26:38.304 CC module/bdev/ftl/bdev_ftl.o 00:26:38.304 SYMLINK libspdk_bdev_passthru.so 00:26:38.304 CC module/bdev/nvme/nvme_rpc.o 00:26:38.304 LIB libspdk_bdev_zone_block.a 00:26:38.304 CC module/bdev/iscsi/bdev_iscsi.o 00:26:38.304 SO libspdk_bdev_zone_block.so.6.0 00:26:38.304 SYMLINK libspdk_bdev_zone_block.so 00:26:38.304 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:26:38.304 CC module/bdev/aio/bdev_aio_rpc.o 00:26:38.304 CC module/bdev/nvme/bdev_mdns_client.o 00:26:38.304 CC module/bdev/raid/raid1.o 00:26:38.562 CC module/bdev/nvme/vbdev_opal.o 00:26:38.562 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:38.562 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:38.562 LIB libspdk_bdev_aio.a 00:26:38.562 SO libspdk_bdev_aio.so.6.0 00:26:38.562 SYMLINK libspdk_bdev_aio.so 00:26:38.562 CC module/bdev/raid/concat.o 00:26:38.821 LIB libspdk_bdev_iscsi.a 00:26:38.821 SO libspdk_bdev_iscsi.so.6.0 00:26:38.821 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:38.821 LIB libspdk_bdev_ftl.a 00:26:38.821 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:38.821 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:38.821 SO libspdk_bdev_ftl.so.6.0 00:26:38.821 CC module/bdev/raid/raid5f.o 00:26:38.821 SYMLINK libspdk_bdev_iscsi.so 00:26:38.821 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:38.821 SYMLINK libspdk_bdev_ftl.so 00:26:39.386 LIB libspdk_bdev_virtio.a 00:26:39.386 LIB libspdk_bdev_raid.a 00:26:39.386 SO libspdk_bdev_virtio.so.6.0 00:26:39.644 SO libspdk_bdev_raid.so.6.0 00:26:39.644 SYMLINK libspdk_bdev_virtio.so 00:26:39.644 SYMLINK libspdk_bdev_raid.so 00:26:40.579 LIB libspdk_bdev_nvme.a 00:26:40.579 SO libspdk_bdev_nvme.so.7.0 00:26:40.837 SYMLINK libspdk_bdev_nvme.so 00:26:41.403 CC module/event/subsystems/keyring/keyring.o 00:26:41.403 CC module/event/subsystems/sock/sock.o 00:26:41.403 CC module/event/subsystems/vmd/vmd.o 00:26:41.403 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:41.403 CC module/event/subsystems/scheduler/scheduler.o 00:26:41.403 CC 
module/event/subsystems/fsdev/fsdev.o 00:26:41.403 CC module/event/subsystems/iobuf/iobuf.o 00:26:41.403 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:41.403 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:41.403 LIB libspdk_event_vhost_blk.a 00:26:41.403 LIB libspdk_event_keyring.a 00:26:41.403 SO libspdk_event_vhost_blk.so.3.0 00:26:41.403 LIB libspdk_event_sock.a 00:26:41.403 LIB libspdk_event_vmd.a 00:26:41.403 LIB libspdk_event_scheduler.a 00:26:41.403 SO libspdk_event_keyring.so.1.0 00:26:41.403 SO libspdk_event_sock.so.5.0 00:26:41.661 LIB libspdk_event_iobuf.a 00:26:41.661 SO libspdk_event_vmd.so.6.0 00:26:41.661 LIB libspdk_event_fsdev.a 00:26:41.661 SO libspdk_event_scheduler.so.4.0 00:26:41.661 SYMLINK libspdk_event_vhost_blk.so 00:26:41.661 SO libspdk_event_iobuf.so.3.0 00:26:41.661 SO libspdk_event_fsdev.so.1.0 00:26:41.661 SYMLINK libspdk_event_sock.so 00:26:41.661 SYMLINK libspdk_event_keyring.so 00:26:41.661 SYMLINK libspdk_event_vmd.so 00:26:41.661 SYMLINK libspdk_event_scheduler.so 00:26:41.661 SYMLINK libspdk_event_iobuf.so 00:26:41.661 SYMLINK libspdk_event_fsdev.so 00:26:41.919 CC module/event/subsystems/accel/accel.o 00:26:42.178 LIB libspdk_event_accel.a 00:26:42.178 SO libspdk_event_accel.so.6.0 00:26:42.178 SYMLINK libspdk_event_accel.so 00:26:42.811 CC module/event/subsystems/bdev/bdev.o 00:26:42.811 LIB libspdk_event_bdev.a 00:26:42.811 SO libspdk_event_bdev.so.6.0 00:26:42.811 SYMLINK libspdk_event_bdev.so 00:26:43.069 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:43.069 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:43.069 CC module/event/subsystems/ublk/ublk.o 00:26:43.069 CC module/event/subsystems/scsi/scsi.o 00:26:43.069 CC module/event/subsystems/nbd/nbd.o 00:26:43.327 LIB libspdk_event_nbd.a 00:26:43.327 LIB libspdk_event_ublk.a 00:26:43.327 LIB libspdk_event_scsi.a 00:26:43.327 SO libspdk_event_nbd.so.6.0 00:26:43.327 SO libspdk_event_ublk.so.3.0 00:26:43.327 SO libspdk_event_scsi.so.6.0 00:26:43.585 SYMLINK 
libspdk_event_ublk.so 00:26:43.585 SYMLINK libspdk_event_nbd.so 00:26:43.585 LIB libspdk_event_nvmf.a 00:26:43.585 SYMLINK libspdk_event_scsi.so 00:26:43.585 SO libspdk_event_nvmf.so.6.0 00:26:43.585 SYMLINK libspdk_event_nvmf.so 00:26:43.843 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:43.843 CC module/event/subsystems/iscsi/iscsi.o 00:26:44.101 LIB libspdk_event_iscsi.a 00:26:44.101 LIB libspdk_event_vhost_scsi.a 00:26:44.101 SO libspdk_event_iscsi.so.6.0 00:26:44.101 SO libspdk_event_vhost_scsi.so.3.0 00:26:44.101 SYMLINK libspdk_event_iscsi.so 00:26:44.101 SYMLINK libspdk_event_vhost_scsi.so 00:26:44.359 SO libspdk.so.6.0 00:26:44.359 SYMLINK libspdk.so 00:26:44.617 CC app/spdk_lspci/spdk_lspci.o 00:26:44.617 CC app/spdk_nvme_perf/perf.o 00:26:44.617 CXX app/trace/trace.o 00:26:44.617 CC app/trace_record/trace_record.o 00:26:44.617 CC app/nvmf_tgt/nvmf_main.o 00:26:44.617 CC app/iscsi_tgt/iscsi_tgt.o 00:26:44.617 CC app/spdk_tgt/spdk_tgt.o 00:26:44.875 CC test/thread/poller_perf/poller_perf.o 00:26:44.875 CC examples/util/zipf/zipf.o 00:26:44.875 CC test/dma/test_dma/test_dma.o 00:26:44.875 LINK spdk_lspci 00:26:44.875 LINK spdk_tgt 00:26:44.875 LINK nvmf_tgt 00:26:44.875 LINK iscsi_tgt 00:26:44.875 LINK poller_perf 00:26:44.875 LINK zipf 00:26:45.133 LINK spdk_trace_record 00:26:45.133 LINK spdk_trace 00:26:45.133 CC app/spdk_nvme_identify/identify.o 00:26:45.133 CC app/spdk_nvme_discover/discovery_aer.o 00:26:45.392 CC app/spdk_top/spdk_top.o 00:26:45.392 CC examples/ioat/perf/perf.o 00:26:45.392 CC app/spdk_dd/spdk_dd.o 00:26:45.392 CC test/app/bdev_svc/bdev_svc.o 00:26:45.392 CC app/fio/nvme/fio_plugin.o 00:26:45.392 LINK test_dma 00:26:45.392 CC app/vhost/vhost.o 00:26:45.392 LINK spdk_nvme_discover 00:26:45.650 LINK ioat_perf 00:26:45.650 LINK bdev_svc 00:26:45.650 LINK vhost 00:26:45.650 LINK spdk_nvme_perf 00:26:45.908 LINK spdk_dd 00:26:45.908 CC examples/ioat/verify/verify.o 00:26:45.908 TEST_HEADER include/spdk/accel.h 00:26:45.908 
TEST_HEADER include/spdk/accel_module.h 00:26:45.908 TEST_HEADER include/spdk/assert.h 00:26:45.908 TEST_HEADER include/spdk/barrier.h 00:26:45.908 TEST_HEADER include/spdk/base64.h 00:26:45.908 TEST_HEADER include/spdk/bdev.h 00:26:45.908 TEST_HEADER include/spdk/bdev_module.h 00:26:45.908 TEST_HEADER include/spdk/bdev_zone.h 00:26:45.908 TEST_HEADER include/spdk/bit_array.h 00:26:45.909 CC app/fio/bdev/fio_plugin.o 00:26:45.909 TEST_HEADER include/spdk/bit_pool.h 00:26:45.909 TEST_HEADER include/spdk/blob_bdev.h 00:26:45.909 TEST_HEADER include/spdk/blobfs_bdev.h 00:26:45.909 TEST_HEADER include/spdk/blobfs.h 00:26:45.909 TEST_HEADER include/spdk/blob.h 00:26:45.909 TEST_HEADER include/spdk/conf.h 00:26:45.909 TEST_HEADER include/spdk/config.h 00:26:45.909 TEST_HEADER include/spdk/cpuset.h 00:26:45.909 TEST_HEADER include/spdk/crc16.h 00:26:45.909 TEST_HEADER include/spdk/crc32.h 00:26:45.909 TEST_HEADER include/spdk/crc64.h 00:26:45.909 TEST_HEADER include/spdk/dif.h 00:26:45.909 TEST_HEADER include/spdk/dma.h 00:26:45.909 TEST_HEADER include/spdk/endian.h 00:26:45.909 TEST_HEADER include/spdk/env_dpdk.h 00:26:45.909 TEST_HEADER include/spdk/env.h 00:26:45.909 TEST_HEADER include/spdk/event.h 00:26:45.909 TEST_HEADER include/spdk/fd_group.h 00:26:45.909 TEST_HEADER include/spdk/fd.h 00:26:45.909 TEST_HEADER include/spdk/file.h 00:26:45.909 TEST_HEADER include/spdk/fsdev.h 00:26:45.909 TEST_HEADER include/spdk/fsdev_module.h 00:26:45.909 TEST_HEADER include/spdk/ftl.h 00:26:45.909 TEST_HEADER include/spdk/fuse_dispatcher.h 00:26:45.909 TEST_HEADER include/spdk/gpt_spec.h 00:26:45.909 TEST_HEADER include/spdk/hexlify.h 00:26:45.909 TEST_HEADER include/spdk/histogram_data.h 00:26:45.909 TEST_HEADER include/spdk/idxd.h 00:26:45.909 TEST_HEADER include/spdk/idxd_spec.h 00:26:45.909 TEST_HEADER include/spdk/init.h 00:26:45.909 TEST_HEADER include/spdk/ioat.h 00:26:45.909 TEST_HEADER include/spdk/ioat_spec.h 00:26:45.909 TEST_HEADER include/spdk/iscsi_spec.h 
00:26:45.909 TEST_HEADER include/spdk/json.h 00:26:45.909 TEST_HEADER include/spdk/jsonrpc.h 00:26:45.909 TEST_HEADER include/spdk/keyring.h 00:26:45.909 TEST_HEADER include/spdk/keyring_module.h 00:26:45.909 TEST_HEADER include/spdk/likely.h 00:26:45.909 TEST_HEADER include/spdk/log.h 00:26:45.909 TEST_HEADER include/spdk/lvol.h 00:26:45.909 TEST_HEADER include/spdk/md5.h 00:26:45.909 TEST_HEADER include/spdk/memory.h 00:26:45.909 TEST_HEADER include/spdk/mmio.h 00:26:45.909 TEST_HEADER include/spdk/nbd.h 00:26:45.909 TEST_HEADER include/spdk/net.h 00:26:45.909 TEST_HEADER include/spdk/notify.h 00:26:45.909 TEST_HEADER include/spdk/nvme.h 00:26:46.167 TEST_HEADER include/spdk/nvme_intel.h 00:26:46.167 TEST_HEADER include/spdk/nvme_ocssd.h 00:26:46.167 CC test/app/histogram_perf/histogram_perf.o 00:26:46.167 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:26:46.167 TEST_HEADER include/spdk/nvme_spec.h 00:26:46.167 TEST_HEADER include/spdk/nvme_zns.h 00:26:46.167 TEST_HEADER include/spdk/nvmf_cmd.h 00:26:46.167 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:26:46.167 TEST_HEADER include/spdk/nvmf.h 00:26:46.167 TEST_HEADER include/spdk/nvmf_spec.h 00:26:46.167 TEST_HEADER include/spdk/nvmf_transport.h 00:26:46.167 TEST_HEADER include/spdk/opal.h 00:26:46.167 TEST_HEADER include/spdk/opal_spec.h 00:26:46.167 TEST_HEADER include/spdk/pci_ids.h 00:26:46.167 TEST_HEADER include/spdk/pipe.h 00:26:46.167 TEST_HEADER include/spdk/queue.h 00:26:46.167 TEST_HEADER include/spdk/reduce.h 00:26:46.167 TEST_HEADER include/spdk/rpc.h 00:26:46.167 TEST_HEADER include/spdk/scheduler.h 00:26:46.167 TEST_HEADER include/spdk/scsi.h 00:26:46.167 TEST_HEADER include/spdk/scsi_spec.h 00:26:46.167 TEST_HEADER include/spdk/sock.h 00:26:46.167 TEST_HEADER include/spdk/stdinc.h 00:26:46.167 TEST_HEADER include/spdk/string.h 00:26:46.167 TEST_HEADER include/spdk/thread.h 00:26:46.167 TEST_HEADER include/spdk/trace.h 00:26:46.167 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:46.167 TEST_HEADER 
include/spdk/trace_parser.h 00:26:46.167 TEST_HEADER include/spdk/tree.h 00:26:46.167 TEST_HEADER include/spdk/ublk.h 00:26:46.167 TEST_HEADER include/spdk/util.h 00:26:46.167 CC test/app/jsoncat/jsoncat.o 00:26:46.167 TEST_HEADER include/spdk/uuid.h 00:26:46.167 TEST_HEADER include/spdk/version.h 00:26:46.167 TEST_HEADER include/spdk/vfio_user_pci.h 00:26:46.167 TEST_HEADER include/spdk/vfio_user_spec.h 00:26:46.167 TEST_HEADER include/spdk/vhost.h 00:26:46.167 TEST_HEADER include/spdk/vmd.h 00:26:46.168 TEST_HEADER include/spdk/xor.h 00:26:46.168 TEST_HEADER include/spdk/zipf.h 00:26:46.168 CXX test/cpp_headers/accel.o 00:26:46.168 LINK verify 00:26:46.168 LINK spdk_nvme_identify 00:26:46.426 LINK histogram_perf 00:26:46.426 CC examples/vmd/lsvmd/lsvmd.o 00:26:46.426 LINK jsoncat 00:26:46.426 LINK spdk_nvme 00:26:46.426 CXX test/cpp_headers/accel_module.o 00:26:46.426 CXX test/cpp_headers/assert.o 00:26:46.426 LINK spdk_top 00:26:46.426 LINK lsvmd 00:26:46.684 LINK spdk_bdev 00:26:46.684 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:46.684 CXX test/cpp_headers/barrier.o 00:26:46.684 CXX test/cpp_headers/base64.o 00:26:46.684 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:46.684 CC examples/idxd/perf/perf.o 00:26:46.684 LINK nvme_fuzz 00:26:46.942 CC examples/vmd/led/led.o 00:26:46.942 CXX test/cpp_headers/bdev.o 00:26:46.942 CC examples/thread/thread/thread_ex.o 00:26:46.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:46.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:46.942 CC test/app/stub/stub.o 00:26:46.942 LINK interrupt_tgt 00:26:46.942 CC test/env/mem_callbacks/mem_callbacks.o 00:26:46.942 LINK led 00:26:47.200 CXX test/cpp_headers/bdev_module.o 00:26:47.200 LINK idxd_perf 00:26:47.200 LINK thread 00:26:47.200 CC test/event/event_perf/event_perf.o 00:26:47.200 LINK stub 00:26:47.200 CC test/event/reactor/reactor.o 00:26:47.200 CC test/env/vtophys/vtophys.o 00:26:47.200 CXX test/cpp_headers/bdev_zone.o 00:26:47.457 LINK vhost_fuzz 00:26:47.457 
LINK reactor 00:26:47.457 LINK vtophys 00:26:47.457 LINK event_perf 00:26:47.457 CC test/nvme/aer/aer.o 00:26:47.457 CXX test/cpp_headers/bit_array.o 00:26:47.457 CC test/nvme/reset/reset.o 00:26:47.716 CXX test/cpp_headers/bit_pool.o 00:26:47.716 CC examples/sock/hello_world/hello_sock.o 00:26:47.716 LINK mem_callbacks 00:26:47.716 CC test/event/reactor_perf/reactor_perf.o 00:26:47.716 CC test/event/app_repeat/app_repeat.o 00:26:47.976 CC test/event/scheduler/scheduler.o 00:26:47.976 CC test/rpc_client/rpc_client_test.o 00:26:47.976 CXX test/cpp_headers/blob_bdev.o 00:26:47.976 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:26:47.976 LINK aer 00:26:47.976 LINK hello_sock 00:26:47.976 LINK reactor_perf 00:26:47.976 LINK app_repeat 00:26:47.976 LINK reset 00:26:48.234 LINK scheduler 00:26:48.234 LINK env_dpdk_post_init 00:26:48.234 LINK rpc_client_test 00:26:48.234 CXX test/cpp_headers/blobfs_bdev.o 00:26:48.234 CC test/env/memory/memory_ut.o 00:26:48.234 CC test/nvme/sgl/sgl.o 00:26:48.234 CXX test/cpp_headers/blobfs.o 00:26:48.492 CC test/env/pci/pci_ut.o 00:26:48.492 CC examples/fsdev/hello_world/hello_fsdev.o 00:26:48.492 CXX test/cpp_headers/blob.o 00:26:48.750 CC examples/accel/perf/accel_perf.o 00:26:48.750 LINK sgl 00:26:48.750 CC test/accel/dif/dif.o 00:26:48.750 CC examples/blob/hello_world/hello_blob.o 00:26:48.750 CC test/blobfs/mkfs/mkfs.o 00:26:48.750 LINK hello_fsdev 00:26:48.750 CXX test/cpp_headers/conf.o 00:26:49.007 LINK pci_ut 00:26:49.007 CC test/nvme/e2edp/nvme_dp.o 00:26:49.007 LINK mkfs 00:26:49.007 LINK hello_blob 00:26:49.007 CXX test/cpp_headers/config.o 00:26:49.007 CXX test/cpp_headers/cpuset.o 00:26:49.264 CC test/nvme/overhead/overhead.o 00:26:49.264 CXX test/cpp_headers/crc16.o 00:26:49.264 CC examples/blob/cli/blobcli.o 00:26:49.264 LINK nvme_dp 00:26:49.521 CC examples/nvme/hello_world/hello_world.o 00:26:49.521 LINK accel_perf 00:26:49.521 CXX test/cpp_headers/crc32.o 00:26:49.521 CC test/lvol/esnap/esnap.o 00:26:49.521 
LINK iscsi_fuzz 00:26:49.521 LINK overhead 00:26:49.521 LINK memory_ut 00:26:49.778 CC test/nvme/err_injection/err_injection.o 00:26:49.778 CXX test/cpp_headers/crc64.o 00:26:49.778 LINK hello_world 00:26:49.778 LINK dif 00:26:49.778 CC examples/nvme/reconnect/reconnect.o 00:26:49.778 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:50.037 LINK blobcli 00:26:50.037 CXX test/cpp_headers/dif.o 00:26:50.037 CC examples/nvme/arbitration/arbitration.o 00:26:50.037 CC examples/nvme/hotplug/hotplug.o 00:26:50.037 LINK err_injection 00:26:50.037 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:50.037 CC examples/nvme/abort/abort.o 00:26:50.037 CXX test/cpp_headers/dma.o 00:26:50.295 LINK reconnect 00:26:50.295 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:50.295 LINK hotplug 00:26:50.295 LINK cmb_copy 00:26:50.295 CXX test/cpp_headers/endian.o 00:26:50.295 CC test/nvme/startup/startup.o 00:26:50.553 LINK arbitration 00:26:50.553 CXX test/cpp_headers/env_dpdk.o 00:26:50.553 LINK nvme_manage 00:26:50.553 LINK pmr_persistence 00:26:50.553 LINK startup 00:26:50.553 LINK abort 00:26:50.553 CC test/nvme/reserve/reserve.o 00:26:50.553 CXX test/cpp_headers/env.o 00:26:50.811 CC examples/bdev/hello_world/hello_bdev.o 00:26:50.811 CC test/nvme/simple_copy/simple_copy.o 00:26:50.811 CC test/bdev/bdevio/bdevio.o 00:26:50.811 CXX test/cpp_headers/event.o 00:26:50.811 CC test/nvme/boot_partition/boot_partition.o 00:26:50.811 CC test/nvme/connect_stress/connect_stress.o 00:26:50.811 CC test/nvme/compliance/nvme_compliance.o 00:26:51.069 CC examples/bdev/bdevperf/bdevperf.o 00:26:51.069 LINK hello_bdev 00:26:51.069 LINK boot_partition 00:26:51.069 CXX test/cpp_headers/fd_group.o 00:26:51.069 LINK reserve 00:26:51.069 LINK simple_copy 00:26:51.069 LINK connect_stress 00:26:51.326 CXX test/cpp_headers/fd.o 00:26:51.326 CXX test/cpp_headers/file.o 00:26:51.326 CXX test/cpp_headers/fsdev.o 00:26:51.326 LINK bdevio 00:26:51.326 LINK nvme_compliance 00:26:51.326 CXX 
test/cpp_headers/fsdev_module.o 00:26:51.326 CXX test/cpp_headers/ftl.o 00:26:51.326 CXX test/cpp_headers/fuse_dispatcher.o 00:26:51.583 CXX test/cpp_headers/gpt_spec.o 00:26:51.583 CXX test/cpp_headers/hexlify.o 00:26:51.583 CC test/nvme/fused_ordering/fused_ordering.o 00:26:51.583 CC test/nvme/doorbell_aers/doorbell_aers.o 00:26:51.583 CXX test/cpp_headers/histogram_data.o 00:26:51.583 CXX test/cpp_headers/idxd.o 00:26:51.583 CXX test/cpp_headers/idxd_spec.o 00:26:51.841 CXX test/cpp_headers/init.o 00:26:51.841 CC test/nvme/fdp/fdp.o 00:26:51.841 CC test/nvme/cuse/cuse.o 00:26:51.841 CXX test/cpp_headers/ioat.o 00:26:51.841 CXX test/cpp_headers/ioat_spec.o 00:26:51.841 LINK fused_ordering 00:26:51.841 LINK doorbell_aers 00:26:51.841 CXX test/cpp_headers/iscsi_spec.o 00:26:52.099 LINK bdevperf 00:26:52.099 CXX test/cpp_headers/json.o 00:26:52.099 CXX test/cpp_headers/jsonrpc.o 00:26:52.099 CXX test/cpp_headers/keyring.o 00:26:52.099 CXX test/cpp_headers/keyring_module.o 00:26:52.099 CXX test/cpp_headers/likely.o 00:26:52.099 CXX test/cpp_headers/log.o 00:26:52.357 CXX test/cpp_headers/lvol.o 00:26:52.357 CXX test/cpp_headers/md5.o 00:26:52.357 CXX test/cpp_headers/memory.o 00:26:52.357 CXX test/cpp_headers/mmio.o 00:26:52.357 CXX test/cpp_headers/nbd.o 00:26:52.357 CXX test/cpp_headers/net.o 00:26:52.357 CXX test/cpp_headers/notify.o 00:26:52.357 LINK fdp 00:26:52.357 CXX test/cpp_headers/nvme.o 00:26:52.357 CXX test/cpp_headers/nvme_intel.o 00:26:52.615 CXX test/cpp_headers/nvme_ocssd.o 00:26:52.615 CC examples/nvmf/nvmf/nvmf.o 00:26:52.615 CXX test/cpp_headers/nvme_ocssd_spec.o 00:26:52.615 CXX test/cpp_headers/nvme_spec.o 00:26:52.615 CXX test/cpp_headers/nvme_zns.o 00:26:52.615 CXX test/cpp_headers/nvmf_cmd.o 00:26:52.615 CXX test/cpp_headers/nvmf_fc_spec.o 00:26:52.615 CXX test/cpp_headers/nvmf.o 00:26:52.615 CXX test/cpp_headers/nvmf_spec.o 00:26:52.873 CXX test/cpp_headers/nvmf_transport.o 00:26:52.873 CXX test/cpp_headers/opal.o 00:26:52.873 CXX 
test/cpp_headers/opal_spec.o 00:26:52.873 CXX test/cpp_headers/pci_ids.o 00:26:52.873 CXX test/cpp_headers/pipe.o 00:26:52.873 LINK nvmf 00:26:52.873 CXX test/cpp_headers/queue.o 00:26:52.873 CXX test/cpp_headers/reduce.o 00:26:52.873 CXX test/cpp_headers/rpc.o 00:26:53.132 CXX test/cpp_headers/scheduler.o 00:26:53.132 CXX test/cpp_headers/scsi.o 00:26:53.132 CXX test/cpp_headers/scsi_spec.o 00:26:53.132 CXX test/cpp_headers/sock.o 00:26:53.132 CXX test/cpp_headers/stdinc.o 00:26:53.132 CXX test/cpp_headers/string.o 00:26:53.132 CXX test/cpp_headers/thread.o 00:26:53.132 CXX test/cpp_headers/trace.o 00:26:53.132 CXX test/cpp_headers/trace_parser.o 00:26:53.132 CXX test/cpp_headers/tree.o 00:26:53.390 CXX test/cpp_headers/ublk.o 00:26:53.390 CXX test/cpp_headers/util.o 00:26:53.390 CXX test/cpp_headers/uuid.o 00:26:53.390 CXX test/cpp_headers/version.o 00:26:53.390 CXX test/cpp_headers/vfio_user_pci.o 00:26:53.390 CXX test/cpp_headers/vfio_user_spec.o 00:26:53.390 CXX test/cpp_headers/vhost.o 00:26:53.390 CXX test/cpp_headers/vmd.o 00:26:53.390 CXX test/cpp_headers/xor.o 00:26:53.390 CXX test/cpp_headers/zipf.o 00:26:53.649 LINK cuse 00:26:57.858 LINK esnap 00:26:57.858 00:26:57.858 real 1m28.669s 00:26:57.858 user 6m46.503s 00:26:57.858 sys 1m29.519s 00:26:57.858 13:56:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:26:57.858 13:56:04 make -- common/autotest_common.sh@10 -- $ set +x 00:26:57.858 ************************************ 00:26:57.858 END TEST make 00:26:57.858 ************************************ 00:26:57.858 13:56:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:26:57.858 13:56:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:57.858 13:56:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:57.858 13:56:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:57.858 13:56:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:57.858 13:56:04 
-- pm/common@44 -- $ pid=6041 00:26:57.858 13:56:04 -- pm/common@50 -- $ kill -TERM 6041 00:26:57.858 13:56:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:57.858 13:56:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:57.858 13:56:04 -- pm/common@44 -- $ pid=6042 00:26:57.858 13:56:04 -- pm/common@50 -- $ kill -TERM 6042 00:26:57.858 13:56:04 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:57.858 13:56:04 -- common/autotest_common.sh@1681 -- # lcov --version 00:26:57.858 13:56:04 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:58.117 13:56:04 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:58.117 13:56:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.117 13:56:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.117 13:56:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.117 13:56:04 -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.117 13:56:04 -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.117 13:56:04 -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.117 13:56:04 -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.117 13:56:04 -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.117 13:56:04 -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.117 13:56:04 -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.117 13:56:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.117 13:56:04 -- scripts/common.sh@344 -- # case "$op" in 00:26:58.117 13:56:04 -- scripts/common.sh@345 -- # : 1 00:26:58.117 13:56:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.117 13:56:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.117 13:56:04 -- scripts/common.sh@365 -- # decimal 1 00:26:58.117 13:56:04 -- scripts/common.sh@353 -- # local d=1 00:26:58.117 13:56:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.117 13:56:04 -- scripts/common.sh@355 -- # echo 1 00:26:58.117 13:56:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.117 13:56:04 -- scripts/common.sh@366 -- # decimal 2 00:26:58.117 13:56:04 -- scripts/common.sh@353 -- # local d=2 00:26:58.117 13:56:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.117 13:56:04 -- scripts/common.sh@355 -- # echo 2 00:26:58.117 13:56:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.117 13:56:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.117 13:56:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.117 13:56:04 -- scripts/common.sh@368 -- # return 0 00:26:58.117 13:56:04 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.117 13:56:04 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:58.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.117 --rc genhtml_branch_coverage=1 00:26:58.117 --rc genhtml_function_coverage=1 00:26:58.117 --rc genhtml_legend=1 00:26:58.117 --rc geninfo_all_blocks=1 00:26:58.117 --rc geninfo_unexecuted_blocks=1 00:26:58.117 00:26:58.117 ' 00:26:58.117 13:56:04 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:58.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.117 --rc genhtml_branch_coverage=1 00:26:58.117 --rc genhtml_function_coverage=1 00:26:58.117 --rc genhtml_legend=1 00:26:58.117 --rc geninfo_all_blocks=1 00:26:58.117 --rc geninfo_unexecuted_blocks=1 00:26:58.117 00:26:58.117 ' 00:26:58.117 13:56:04 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:58.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.117 --rc genhtml_branch_coverage=1 00:26:58.117 --rc 
genhtml_function_coverage=1 00:26:58.117 --rc genhtml_legend=1 00:26:58.117 --rc geninfo_all_blocks=1 00:26:58.117 --rc geninfo_unexecuted_blocks=1 00:26:58.117 00:26:58.117 ' 00:26:58.117 13:56:04 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:58.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.117 --rc genhtml_branch_coverage=1 00:26:58.117 --rc genhtml_function_coverage=1 00:26:58.117 --rc genhtml_legend=1 00:26:58.117 --rc geninfo_all_blocks=1 00:26:58.117 --rc geninfo_unexecuted_blocks=1 00:26:58.117 00:26:58.117 ' 00:26:58.117 13:56:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:58.117 13:56:04 -- nvmf/common.sh@7 -- # uname -s 00:26:58.117 13:56:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.117 13:56:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.117 13:56:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.117 13:56:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.117 13:56:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.118 13:56:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.118 13:56:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.118 13:56:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.118 13:56:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.118 13:56:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.118 13:56:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e35b0848-66bf-4384-b956-5a01a608691e 00:26:58.118 13:56:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=e35b0848-66bf-4384-b956-5a01a608691e 00:26:58.118 13:56:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.118 13:56:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.118 13:56:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:58.118 13:56:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:58.118 13:56:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:58.118 13:56:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.118 13:56:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.118 13:56:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.118 13:56:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.118 13:56:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.118 13:56:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.118 13:56:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.118 13:56:04 -- paths/export.sh@5 -- # export PATH 00:26:58.118 13:56:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.118 13:56:04 -- nvmf/common.sh@51 -- # : 0 00:26:58.118 13:56:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.118 13:56:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.118 13:56:04 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:26:58.118 13:56:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.118 13:56:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.118 13:56:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.118 13:56:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.118 13:56:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.118 13:56:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.118 13:56:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:26:58.118 13:56:04 -- spdk/autotest.sh@32 -- # uname -s 00:26:58.118 13:56:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:26:58.118 13:56:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:26:58.118 13:56:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:58.118 13:56:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:26:58.118 13:56:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:26:58.118 13:56:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:26:58.118 13:56:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:26:58.118 13:56:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:26:58.118 13:56:04 -- spdk/autotest.sh@48 -- # udevadm_pid=66895 00:26:58.118 13:56:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:26:58.118 13:56:04 -- pm/common@17 -- # local monitor 00:26:58.118 13:56:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:58.118 13:56:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:26:58.118 13:56:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:58.118 13:56:04 -- pm/common@21 -- # date +%s 00:26:58.118 13:56:04 -- pm/common@21 -- # date +%s 00:26:58.118 13:56:04 -- 
pm/common@25 -- # sleep 1 00:26:58.118 13:56:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728482164 00:26:58.118 13:56:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728482164 00:26:58.118 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728482164_collect-cpu-load.pm.log 00:26:58.118 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728482164_collect-vmstat.pm.log 00:26:59.051 13:56:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:26:59.051 13:56:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:26:59.051 13:56:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:59.051 13:56:05 -- common/autotest_common.sh@10 -- # set +x 00:26:59.051 13:56:05 -- spdk/autotest.sh@59 -- # create_test_list 00:26:59.051 13:56:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:26:59.051 13:56:05 -- common/autotest_common.sh@10 -- # set +x 00:26:59.051 13:56:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:26:59.051 13:56:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:26:59.310 13:56:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:26:59.310 13:56:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:26:59.310 13:56:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:26:59.310 13:56:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:26:59.310 13:56:05 -- common/autotest_common.sh@1455 -- # uname 00:26:59.310 13:56:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:26:59.310 13:56:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:26:59.310 13:56:05 -- common/autotest_common.sh@1475 -- # 
uname 00:26:59.310 13:56:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:26:59.310 13:56:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:26:59.310 13:56:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:26:59.310 lcov: LCOV version 1.15 00:26:59.310 13:56:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:27:17.390 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:27:17.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:27:35.492 13:56:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:27:35.492 13:56:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.492 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:27:35.492 13:56:39 -- spdk/autotest.sh@78 -- # rm -f 00:27:35.492 13:56:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:35.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:35.492 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:35.492 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:35.492 13:56:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:27:35.492 13:56:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:27:35.492 13:56:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:27:35.492 13:56:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:27:35.492 13:56:40 
-- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:35.492 13:56:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:27:35.492 13:56:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:35.492 13:56:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:35.492 13:56:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:27:35.492 13:56:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:35.492 13:56:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:35.492 13:56:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:27:35.492 13:56:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:27:35.492 13:56:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:35.492 13:56:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:27:35.492 13:56:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:27:35.492 13:56:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:27:35.492 13:56:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:35.492 13:56:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:27:35.492 13:56:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:35.492 13:56:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:35.492 13:56:40 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:27:35.492 13:56:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:27:35.492 13:56:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:27:35.492 No valid GPT data, bailing 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # pt= 00:27:35.492 13:56:40 -- scripts/common.sh@395 -- # return 1 00:27:35.492 13:56:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:27:35.492 1+0 records in 00:27:35.492 1+0 records out 00:27:35.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595655 s, 176 MB/s 00:27:35.492 13:56:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:35.492 13:56:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:35.492 13:56:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:27:35.492 13:56:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:27:35.492 13:56:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:27:35.492 No valid GPT data, bailing 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # pt= 00:27:35.492 13:56:40 -- scripts/common.sh@395 -- # return 1 00:27:35.492 13:56:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:27:35.492 1+0 records in 00:27:35.492 1+0 records out 00:27:35.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586893 s, 179 MB/s 00:27:35.492 13:56:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:35.492 13:56:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:35.492 13:56:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:27:35.492 13:56:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:27:35.492 13:56:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:27:35.492 
No valid GPT data, bailing 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:35.492 13:56:40 -- scripts/common.sh@394 -- # pt= 00:27:35.492 13:56:40 -- scripts/common.sh@395 -- # return 1 00:27:35.492 13:56:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:27:35.492 1+0 records in 00:27:35.492 1+0 records out 00:27:35.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572983 s, 183 MB/s 00:27:35.492 13:56:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:27:35.492 13:56:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:27:35.492 13:56:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:27:35.492 13:56:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:27:35.492 13:56:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:27:35.492 No valid GPT data, bailing 00:27:35.492 13:56:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:35.492 13:56:41 -- scripts/common.sh@394 -- # pt= 00:27:35.492 13:56:41 -- scripts/common.sh@395 -- # return 1 00:27:35.492 13:56:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:27:35.492 1+0 records in 00:27:35.492 1+0 records out 00:27:35.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557617 s, 188 MB/s 00:27:35.492 13:56:41 -- spdk/autotest.sh@105 -- # sync 00:27:35.492 13:56:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:27:35.492 13:56:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:27:35.492 13:56:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:27:36.867 13:56:43 -- spdk/autotest.sh@111 -- # uname -s 00:27:36.867 13:56:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:27:36.867 13:56:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:27:36.867 13:56:43 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:37.803 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:37.803 Hugepages 00:27:37.803 node hugesize free / total 00:27:37.803 node0 1048576kB 0 / 0 00:27:37.803 node0 2048kB 0 / 0 00:27:37.803 00:27:37.803 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:37.803 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:37.803 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:37.803 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:27:37.803 13:56:44 -- spdk/autotest.sh@117 -- # uname -s 00:27:37.803 13:56:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:27:37.803 13:56:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:27:37.803 13:56:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:38.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:38.737 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:38.737 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:38.737 13:56:45 -- common/autotest_common.sh@1515 -- # sleep 1 00:27:40.109 13:56:46 -- common/autotest_common.sh@1516 -- # bdfs=() 00:27:40.109 13:56:46 -- common/autotest_common.sh@1516 -- # local bdfs 00:27:40.109 13:56:46 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:27:40.109 13:56:46 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:27:40.109 13:56:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:40.109 13:56:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:40.109 13:56:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:40.109 13:56:46 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:40.109 13:56:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:40.109 13:56:46 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:27:40.109 13:56:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:40.109 13:56:46 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:40.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:40.367 Waiting for block devices as requested 00:27:40.367 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:40.367 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:40.624 13:56:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:40.624 13:56:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:27:40.624 13:56:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:27:40.624 13:56:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:27:40.624 13:56:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:40.624 13:56:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:27:40.624 13:56:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:27:40.624 13:56:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:27:40.625 13:56:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:27:40.625 13:56:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:40.625 13:56:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:40.625 13:56:47 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:40.625 13:56:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1541 -- # continue 00:27:40.625 13:56:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:27:40.625 13:56:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:27:40.625 13:56:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:27:40.625 13:56:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:27:40.625 13:56:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:27:40.625 13:56:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:27:40.625 13:56:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:27:40.625 13:56:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:27:40.625 13:56:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:27:40.625 13:56:47 -- common/autotest_common.sh@1541 -- # continue 00:27:40.625 13:56:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:27:40.625 13:56:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.625 13:56:47 -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 13:56:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:27:40.625 13:56:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:40.625 13:56:47 -- common/autotest_common.sh@10 -- # set +x 00:27:40.625 13:56:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:41.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:41.574 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:41.574 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:41.574 13:56:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:27:41.574 13:56:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.574 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.835 13:56:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:27:41.835 13:56:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:27:41.835 13:56:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:27:41.835 13:56:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:27:41.835 13:56:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:27:41.835 13:56:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:27:41.835 13:56:48 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:27:41.835 13:56:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:27:41.835 
13:56:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:27:41.835 13:56:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:27:41.835 13:56:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:41.835 13:56:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:41.835 13:56:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:27:41.835 13:56:48 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:27:41.835 13:56:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:41.835 13:56:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:27:41.835 13:56:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:27:41.835 13:56:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:41.835 13:56:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:41.835 13:56:48 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:27:41.835 13:56:48 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:27:41.835 13:56:48 -- common/autotest_common.sh@1564 -- # device=0x0010 00:27:41.835 13:56:48 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:27:41.835 13:56:48 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:27:41.835 13:56:48 -- common/autotest_common.sh@1570 -- # return 0 00:27:41.835 13:56:48 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:27:41.835 13:56:48 -- common/autotest_common.sh@1578 -- # return 0 00:27:41.835 13:56:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:27:41.835 13:56:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:27:41.835 13:56:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:41.835 13:56:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:27:41.835 13:56:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:27:41.835 13:56:48 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.835 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.835 13:56:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:27:41.835 13:56:48 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:41.835 13:56:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:41.835 13:56:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:41.835 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.835 ************************************ 00:27:41.835 START TEST env 00:27:41.835 ************************************ 00:27:41.835 13:56:48 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:27:41.835 * Looking for test storage... 00:27:41.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:27:41.835 13:56:48 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:41.835 13:56:48 env -- common/autotest_common.sh@1681 -- # lcov --version 00:27:41.835 13:56:48 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:42.094 13:56:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.094 13:56:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.094 13:56:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.094 13:56:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.094 13:56:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.094 13:56:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.094 13:56:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.094 13:56:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.094 13:56:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.094 13:56:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.094 13:56:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.094 13:56:48 env -- 
scripts/common.sh@344 -- # case "$op" in 00:27:42.094 13:56:48 env -- scripts/common.sh@345 -- # : 1 00:27:42.094 13:56:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.094 13:56:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:42.094 13:56:48 env -- scripts/common.sh@365 -- # decimal 1 00:27:42.094 13:56:48 env -- scripts/common.sh@353 -- # local d=1 00:27:42.094 13:56:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.094 13:56:48 env -- scripts/common.sh@355 -- # echo 1 00:27:42.094 13:56:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.094 13:56:48 env -- scripts/common.sh@366 -- # decimal 2 00:27:42.094 13:56:48 env -- scripts/common.sh@353 -- # local d=2 00:27:42.094 13:56:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.094 13:56:48 env -- scripts/common.sh@355 -- # echo 2 00:27:42.094 13:56:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.094 13:56:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.094 13:56:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.094 13:56:48 env -- scripts/common.sh@368 -- # return 0 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:42.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.094 --rc genhtml_branch_coverage=1 00:27:42.094 --rc genhtml_function_coverage=1 00:27:42.094 --rc genhtml_legend=1 00:27:42.094 --rc geninfo_all_blocks=1 00:27:42.094 --rc geninfo_unexecuted_blocks=1 00:27:42.094 00:27:42.094 ' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:42.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.094 --rc genhtml_branch_coverage=1 00:27:42.094 --rc genhtml_function_coverage=1 00:27:42.094 --rc genhtml_legend=1 00:27:42.094 --rc 
geninfo_all_blocks=1 00:27:42.094 --rc geninfo_unexecuted_blocks=1 00:27:42.094 00:27:42.094 ' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:42.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.094 --rc genhtml_branch_coverage=1 00:27:42.094 --rc genhtml_function_coverage=1 00:27:42.094 --rc genhtml_legend=1 00:27:42.094 --rc geninfo_all_blocks=1 00:27:42.094 --rc geninfo_unexecuted_blocks=1 00:27:42.094 00:27:42.094 ' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:42.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.094 --rc genhtml_branch_coverage=1 00:27:42.094 --rc genhtml_function_coverage=1 00:27:42.094 --rc genhtml_legend=1 00:27:42.094 --rc geninfo_all_blocks=1 00:27:42.094 --rc geninfo_unexecuted_blocks=1 00:27:42.094 00:27:42.094 ' 00:27:42.094 13:56:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:42.094 13:56:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:42.094 13:56:48 env -- common/autotest_common.sh@10 -- # set +x 00:27:42.094 ************************************ 00:27:42.094 START TEST env_memory 00:27:42.094 ************************************ 00:27:42.094 13:56:48 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:27:42.094 00:27:42.094 00:27:42.094 CUnit - A unit testing framework for C - Version 2.1-3 00:27:42.094 http://cunit.sourceforge.net/ 00:27:42.094 00:27:42.094 00:27:42.094 Suite: memory 00:27:42.094 Test: alloc and free memory map ...[2024-10-09 13:56:48.530767] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:27:42.094 passed 00:27:42.094 Test: mem map translation ...[2024-10-09 13:56:48.606829] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:27:42.094 [2024-10-09 13:56:48.607122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:27:42.094 [2024-10-09 13:56:48.607390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:27:42.094 [2024-10-09 13:56:48.607761] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:27:42.353 passed 00:27:42.353 Test: mem map registration ...[2024-10-09 13:56:48.725375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:27:42.353 [2024-10-09 13:56:48.725716] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:27:42.353 passed 00:27:42.353 Test: mem map adjacent registrations ...passed 00:27:42.353 00:27:42.353 Run Summary: Type Total Ran Passed Failed Inactive 00:27:42.353 suites 1 1 n/a 0 0 00:27:42.353 tests 4 4 4 0 0 00:27:42.353 asserts 152 152 152 0 n/a 00:27:42.353 00:27:42.353 Elapsed time = 0.407 seconds 00:27:42.353 00:27:42.353 real 0m0.454s 00:27:42.353 user 0m0.410s 00:27:42.353 sys 0m0.033s 00:27:42.353 13:56:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:42.353 13:56:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:27:42.353 ************************************ 00:27:42.353 END TEST env_memory 00:27:42.353 ************************************ 00:27:42.610 13:56:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:42.610 
13:56:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:42.610 13:56:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:42.610 13:56:48 env -- common/autotest_common.sh@10 -- # set +x 00:27:42.610 ************************************ 00:27:42.611 START TEST env_vtophys 00:27:42.611 ************************************ 00:27:42.611 13:56:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:27:42.611 EAL: lib.eal log level changed from notice to debug 00:27:42.611 EAL: Detected lcore 0 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 1 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 2 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 3 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 4 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 5 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 6 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 7 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 8 as core 0 on socket 0 00:27:42.611 EAL: Detected lcore 9 as core 0 on socket 0 00:27:42.611 EAL: Maximum logical cores by configuration: 128 00:27:42.611 EAL: Detected CPU lcores: 10 00:27:42.611 EAL: Detected NUMA nodes: 1 00:27:42.611 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:27:42.611 EAL: Detected shared linkage of DPDK 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:27:42.611 EAL: Registered [vdev] bus. 
00:27:42.611 EAL: bus.vdev log level changed from disabled to notice 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:27:42.611 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:27:42.611 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:27:42.611 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:27:42.611 EAL: No shared files mode enabled, IPC will be disabled 00:27:42.611 EAL: No shared files mode enabled, IPC is disabled 00:27:42.611 EAL: Selected IOVA mode 'PA' 00:27:42.611 EAL: Probing VFIO support... 00:27:42.611 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:42.611 EAL: VFIO modules not loaded, skipping VFIO support... 00:27:42.611 EAL: Ask a virtual area of 0x2e000 bytes 00:27:42.611 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:27:42.611 EAL: Setting up physically contiguous memory... 
00:27:42.611 EAL: Setting maximum number of open files to 524288 00:27:42.611 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:27:42.611 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:27:42.611 EAL: Ask a virtual area of 0x61000 bytes 00:27:42.611 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:27:42.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:42.611 EAL: Ask a virtual area of 0x400000000 bytes 00:27:42.611 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:27:42.611 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:27:42.611 EAL: Ask a virtual area of 0x61000 bytes 00:27:42.611 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:27:42.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:42.611 EAL: Ask a virtual area of 0x400000000 bytes 00:27:42.611 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:27:42.611 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:27:42.611 EAL: Ask a virtual area of 0x61000 bytes 00:27:42.611 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:27:42.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:42.611 EAL: Ask a virtual area of 0x400000000 bytes 00:27:42.611 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:27:42.611 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:27:42.611 EAL: Ask a virtual area of 0x61000 bytes 00:27:42.611 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:27:42.611 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:27:42.611 EAL: Ask a virtual area of 0x400000000 bytes 00:27:42.611 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:27:42.611 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:27:42.611 EAL: Hugepages will be freed exactly as allocated. 
00:27:42.611 EAL: No shared files mode enabled, IPC is disabled 00:27:42.611 EAL: No shared files mode enabled, IPC is disabled 00:27:42.869 EAL: TSC frequency is ~2100000 KHz 00:27:42.869 EAL: Main lcore 0 is ready (tid=7f8cc863aa40;cpuset=[0]) 00:27:42.869 EAL: Trying to obtain current memory policy. 00:27:42.869 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:42.869 EAL: Restoring previous memory policy: 0 00:27:42.869 EAL: request: mp_malloc_sync 00:27:42.869 EAL: No shared files mode enabled, IPC is disabled 00:27:42.869 EAL: Heap on socket 0 was expanded by 2MB 00:27:42.869 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:27:42.869 EAL: No shared files mode enabled, IPC is disabled 00:27:42.869 EAL: No PCI address specified using 'addr=' in: bus=pci 00:27:42.869 EAL: Mem event callback 'spdk:(nil)' registered 00:27:42.869 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:27:42.869 00:27:42.869 00:27:42.869 CUnit - A unit testing framework for C - Version 2.1-3 00:27:42.869 http://cunit.sourceforge.net/ 00:27:42.869 00:27:42.869 00:27:42.869 Suite: components_suite 00:27:43.435 Test: vtophys_malloc_test ...passed 00:27:43.435 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:27:43.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.435 EAL: Restoring previous memory policy: 4 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was expanded by 4MB 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was shrunk by 4MB 00:27:43.435 EAL: Trying to obtain current memory policy. 
00:27:43.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.435 EAL: Restoring previous memory policy: 4 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was expanded by 6MB 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was shrunk by 6MB 00:27:43.435 EAL: Trying to obtain current memory policy. 00:27:43.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.435 EAL: Restoring previous memory policy: 4 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was expanded by 10MB 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was shrunk by 10MB 00:27:43.435 EAL: Trying to obtain current memory policy. 00:27:43.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.435 EAL: Restoring previous memory policy: 4 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.435 EAL: request: mp_malloc_sync 00:27:43.435 EAL: No shared files mode enabled, IPC is disabled 00:27:43.435 EAL: Heap on socket 0 was expanded by 18MB 00:27:43.435 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was shrunk by 18MB 00:27:43.436 EAL: Trying to obtain current memory policy. 
00:27:43.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.436 EAL: Restoring previous memory policy: 4 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was expanded by 34MB 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was shrunk by 34MB 00:27:43.436 EAL: Trying to obtain current memory policy. 00:27:43.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.436 EAL: Restoring previous memory policy: 4 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was expanded by 66MB 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was shrunk by 66MB 00:27:43.436 EAL: Trying to obtain current memory policy. 00:27:43.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.436 EAL: Restoring previous memory policy: 4 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was expanded by 130MB 00:27:43.436 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.436 EAL: request: mp_malloc_sync 00:27:43.436 EAL: No shared files mode enabled, IPC is disabled 00:27:43.436 EAL: Heap on socket 0 was shrunk by 130MB 00:27:43.436 EAL: Trying to obtain current memory policy. 
00:27:43.436 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.694 EAL: Restoring previous memory policy: 4 00:27:43.694 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.694 EAL: request: mp_malloc_sync 00:27:43.694 EAL: No shared files mode enabled, IPC is disabled 00:27:43.694 EAL: Heap on socket 0 was expanded by 258MB 00:27:43.694 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.694 EAL: request: mp_malloc_sync 00:27:43.694 EAL: No shared files mode enabled, IPC is disabled 00:27:43.694 EAL: Heap on socket 0 was shrunk by 258MB 00:27:43.694 EAL: Trying to obtain current memory policy. 00:27:43.694 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:43.694 EAL: Restoring previous memory policy: 4 00:27:43.694 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.694 EAL: request: mp_malloc_sync 00:27:43.694 EAL: No shared files mode enabled, IPC is disabled 00:27:43.694 EAL: Heap on socket 0 was expanded by 514MB 00:27:43.951 EAL: Calling mem event callback 'spdk:(nil)' 00:27:43.951 EAL: request: mp_malloc_sync 00:27:43.951 EAL: No shared files mode enabled, IPC is disabled 00:27:43.951 EAL: Heap on socket 0 was shrunk by 514MB 00:27:43.951 EAL: Trying to obtain current memory policy. 
00:27:43.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:27:44.210 EAL: Restoring previous memory policy: 4 00:27:44.210 EAL: Calling mem event callback 'spdk:(nil)' 00:27:44.210 EAL: request: mp_malloc_sync 00:27:44.210 EAL: No shared files mode enabled, IPC is disabled 00:27:44.210 EAL: Heap on socket 0 was expanded by 1026MB 00:27:44.468 EAL: Calling mem event callback 'spdk:(nil)' 00:27:44.468 passed 00:27:44.468 00:27:44.468 EAL: request: mp_malloc_sync 00:27:44.468 EAL: No shared files mode enabled, IPC is disabled 00:27:44.468 EAL: Heap on socket 0 was shrunk by 1026MB 00:27:44.468 Run Summary: Type Total Ran Passed Failed Inactive 00:27:44.468 suites 1 1 n/a 0 0 00:27:44.468 tests 2 2 2 0 0 00:27:44.468 asserts 5421 5421 5421 0 n/a 00:27:44.468 00:27:44.468 Elapsed time = 1.753 seconds 00:27:44.468 EAL: Calling mem event callback 'spdk:(nil)' 00:27:44.468 EAL: request: mp_malloc_sync 00:27:44.468 EAL: No shared files mode enabled, IPC is disabled 00:27:44.468 EAL: Heap on socket 0 was shrunk by 2MB 00:27:44.468 EAL: No shared files mode enabled, IPC is disabled 00:27:44.468 EAL: No shared files mode enabled, IPC is disabled 00:27:44.468 EAL: No shared files mode enabled, IPC is disabled 00:27:44.468 00:27:44.468 real 0m2.061s 00:27:44.468 user 0m0.949s 00:27:44.468 sys 0m0.971s 00:27:44.468 13:56:51 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.727 13:56:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 ************************************ 00:27:44.727 END TEST env_vtophys 00:27:44.727 ************************************ 00:27:44.727 13:56:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:44.727 13:56:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.727 13:56:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.727 13:56:51 env -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 
************************************ 00:27:44.727 START TEST env_pci 00:27:44.727 ************************************ 00:27:44.727 13:56:51 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:27:44.727 00:27:44.727 00:27:44.727 CUnit - A unit testing framework for C - Version 2.1-3 00:27:44.727 http://cunit.sourceforge.net/ 00:27:44.727 00:27:44.727 00:27:44.727 Suite: pci 00:27:44.727 Test: pci_hook ...[2024-10-09 13:56:51.109528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69183 has claimed it 00:27:44.727 passed 00:27:44.727 00:27:44.727 EAL: Cannot find device (10000:00:01.0) 00:27:44.727 EAL: Failed to attach device on primary process 00:27:44.727 Run Summary: Type Total Ran Passed Failed Inactive 00:27:44.727 suites 1 1 n/a 0 0 00:27:44.727 tests 1 1 1 0 0 00:27:44.727 asserts 25 25 25 0 n/a 00:27:44.727 00:27:44.727 Elapsed time = 0.007 seconds 00:27:44.727 00:27:44.727 real 0m0.089s 00:27:44.727 user 0m0.033s 00:27:44.727 sys 0m0.055s 00:27:44.727 13:56:51 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.727 13:56:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 ************************************ 00:27:44.727 END TEST env_pci 00:27:44.727 ************************************ 00:27:44.727 13:56:51 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:27:44.727 13:56:51 env -- env/env.sh@15 -- # uname 00:27:44.727 13:56:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:27:44.727 13:56:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:27:44.727 13:56:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:44.727 13:56:51 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:44.727 13:56:51 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.727 13:56:51 env -- common/autotest_common.sh@10 -- # set +x 00:27:44.727 ************************************ 00:27:44.727 START TEST env_dpdk_post_init 00:27:44.727 ************************************ 00:27:44.727 13:56:51 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:27:44.986 EAL: Detected CPU lcores: 10 00:27:44.986 EAL: Detected NUMA nodes: 1 00:27:44.986 EAL: Detected shared linkage of DPDK 00:27:44.986 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:44.986 EAL: Selected IOVA mode 'PA' 00:27:44.986 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:27:44.986 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:27:44.986 Starting DPDK initialization... 00:27:44.986 Starting SPDK post initialization... 00:27:44.986 SPDK NVMe probe 00:27:44.986 Attaching to 0000:00:10.0 00:27:44.986 Attaching to 0000:00:11.0 00:27:44.986 Attached to 0000:00:10.0 00:27:44.986 Attached to 0000:00:11.0 00:27:44.986 Cleaning up... 
00:27:44.986 00:27:44.986 real 0m0.303s 00:27:44.986 user 0m0.076s 00:27:44.986 sys 0m0.126s 00:27:44.986 13:56:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.986 13:56:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:27:44.986 ************************************ 00:27:44.986 END TEST env_dpdk_post_init 00:27:44.986 ************************************ 00:27:45.245 13:56:51 env -- env/env.sh@26 -- # uname 00:27:45.245 13:56:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:27:45.245 13:56:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:45.245 13:56:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:45.245 13:56:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:45.245 13:56:51 env -- common/autotest_common.sh@10 -- # set +x 00:27:45.245 ************************************ 00:27:45.245 START TEST env_mem_callbacks 00:27:45.245 ************************************ 00:27:45.245 13:56:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:27:45.245 EAL: Detected CPU lcores: 10 00:27:45.245 EAL: Detected NUMA nodes: 1 00:27:45.245 EAL: Detected shared linkage of DPDK 00:27:45.245 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:27:45.245 EAL: Selected IOVA mode 'PA' 00:27:45.245 TELEMETRY: No legacy callbacks, legacy socket not created 00:27:45.245 00:27:45.245 00:27:45.245 CUnit - A unit testing framework for C - Version 2.1-3 00:27:45.245 http://cunit.sourceforge.net/ 00:27:45.245 00:27:45.245 00:27:45.245 Suite: memory 00:27:45.245 Test: test ... 
00:27:45.245 register 0x200000200000 2097152 00:27:45.245 malloc 3145728 00:27:45.245 register 0x200000400000 4194304 00:27:45.245 buf 0x200000500000 len 3145728 PASSED 00:27:45.245 malloc 64 00:27:45.245 buf 0x2000004fff40 len 64 PASSED 00:27:45.245 malloc 4194304 00:27:45.245 register 0x200000800000 6291456 00:27:45.245 buf 0x200000a00000 len 4194304 PASSED 00:27:45.245 free 0x200000500000 3145728 00:27:45.245 free 0x2000004fff40 64 00:27:45.245 unregister 0x200000400000 4194304 PASSED 00:27:45.245 free 0x200000a00000 4194304 00:27:45.245 unregister 0x200000800000 6291456 PASSED 00:27:45.245 malloc 8388608 00:27:45.245 register 0x200000400000 10485760 00:27:45.245 buf 0x200000600000 len 8388608 PASSED 00:27:45.245 free 0x200000600000 8388608 00:27:45.245 unregister 0x200000400000 10485760 PASSED 00:27:45.245 passed 00:27:45.245 00:27:45.245 Run Summary: Type Total Ran Passed Failed Inactive 00:27:45.245 suites 1 1 n/a 0 0 00:27:45.245 tests 1 1 1 0 0 00:27:45.245 asserts 15 15 15 0 n/a 00:27:45.245 00:27:45.245 Elapsed time = 0.012 seconds 00:27:45.503 00:27:45.503 real 0m0.221s 00:27:45.503 user 0m0.047s 00:27:45.503 sys 0m0.073s 00:27:45.503 13:56:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:45.503 13:56:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 ************************************ 00:27:45.503 END TEST env_mem_callbacks 00:27:45.503 ************************************ 00:27:45.503 ************************************ 00:27:45.503 END TEST env 00:27:45.503 ************************************ 00:27:45.503 00:27:45.503 real 0m3.624s 00:27:45.503 user 0m1.748s 00:27:45.503 sys 0m1.524s 00:27:45.503 13:56:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:45.503 13:56:51 env -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 13:56:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:45.503 13:56:51 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:45.503 13:56:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:45.503 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:27:45.503 ************************************ 00:27:45.503 START TEST rpc 00:27:45.503 ************************************ 00:27:45.503 13:56:51 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:27:45.503 * Looking for test storage... 00:27:45.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:45.503 13:56:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:45.503 13:56:52 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:45.503 13:56:52 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.762 13:56:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.762 13:56:52 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.762 13:56:52 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.762 13:56:52 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.762 13:56:52 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.762 13:56:52 rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:45.762 13:56:52 rpc -- scripts/common.sh@345 -- # : 1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.762 13:56:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:45.762 13:56:52 rpc -- scripts/common.sh@365 -- # decimal 1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@353 -- # local d=1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.762 13:56:52 rpc -- scripts/common.sh@355 -- # echo 1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.762 13:56:52 rpc -- scripts/common.sh@366 -- # decimal 2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@353 -- # local d=2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.762 13:56:52 rpc -- scripts/common.sh@355 -- # echo 2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.762 13:56:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.762 13:56:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.762 13:56:52 rpc -- scripts/common.sh@368 -- # return 0 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:45.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.762 --rc genhtml_branch_coverage=1 00:27:45.762 --rc genhtml_function_coverage=1 00:27:45.762 --rc genhtml_legend=1 00:27:45.762 --rc geninfo_all_blocks=1 00:27:45.762 --rc geninfo_unexecuted_blocks=1 00:27:45.762 00:27:45.762 ' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:45.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.762 --rc genhtml_branch_coverage=1 00:27:45.762 --rc genhtml_function_coverage=1 00:27:45.762 --rc genhtml_legend=1 00:27:45.762 --rc geninfo_all_blocks=1 00:27:45.762 --rc geninfo_unexecuted_blocks=1 00:27:45.762 00:27:45.762 ' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:45.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:27:45.762 --rc genhtml_branch_coverage=1 00:27:45.762 --rc genhtml_function_coverage=1 00:27:45.762 --rc genhtml_legend=1 00:27:45.762 --rc geninfo_all_blocks=1 00:27:45.762 --rc geninfo_unexecuted_blocks=1 00:27:45.762 00:27:45.762 ' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:45.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.762 --rc genhtml_branch_coverage=1 00:27:45.762 --rc genhtml_function_coverage=1 00:27:45.762 --rc genhtml_legend=1 00:27:45.762 --rc geninfo_all_blocks=1 00:27:45.762 --rc geninfo_unexecuted_blocks=1 00:27:45.762 00:27:45.762 ' 00:27:45.762 13:56:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69311 00:27:45.762 13:56:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:45.762 13:56:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:27:45.762 13:56:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69311 00:27:45.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@831 -- # '[' -z 69311 ']' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.762 13:56:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:45.762 [2024-10-09 13:56:52.256078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:45.762 [2024-10-09 13:56:52.256274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69311 ] 00:27:46.021 [2024-10-09 13:56:52.438867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.021 [2024-10-09 13:56:52.487471] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:27:46.021 [2024-10-09 13:56:52.487538] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69311' to capture a snapshot of events at runtime. 00:27:46.021 [2024-10-09 13:56:52.487570] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.021 [2024-10-09 13:56:52.487583] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.021 [2024-10-09 13:56:52.487600] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69311 for offline analysis/debug. 
00:27:46.021 [2024-10-09 13:56:52.487664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.956 13:56:53 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.956 13:56:53 rpc -- common/autotest_common.sh@864 -- # return 0 00:27:46.956 13:56:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:46.956 13:56:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:27:46.956 13:56:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:27:46.956 13:56:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:27:46.956 13:56:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:46.956 13:56:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:46.956 13:56:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:46.956 ************************************ 00:27:46.956 START TEST rpc_integrity 00:27:46.956 ************************************ 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:46.956 13:56:53 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.956 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.956 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:46.956 { 00:27:46.956 "name": "Malloc0", 00:27:46.956 "aliases": [ 00:27:46.956 "3838f88f-336a-4734-9b65-55f700643bc2" 00:27:46.956 ], 00:27:46.956 "product_name": "Malloc disk", 00:27:46.956 "block_size": 512, 00:27:46.956 "num_blocks": 16384, 00:27:46.956 "uuid": "3838f88f-336a-4734-9b65-55f700643bc2", 00:27:46.956 "assigned_rate_limits": { 00:27:46.956 "rw_ios_per_sec": 0, 00:27:46.956 "rw_mbytes_per_sec": 0, 00:27:46.956 "r_mbytes_per_sec": 0, 00:27:46.956 "w_mbytes_per_sec": 0 00:27:46.956 }, 00:27:46.956 "claimed": false, 00:27:46.956 "zoned": false, 00:27:46.956 "supported_io_types": { 00:27:46.956 "read": true, 00:27:46.956 "write": true, 00:27:46.956 "unmap": true, 00:27:46.956 "flush": true, 00:27:46.956 "reset": true, 00:27:46.956 "nvme_admin": false, 00:27:46.956 "nvme_io": false, 00:27:46.956 "nvme_io_md": false, 00:27:46.956 "write_zeroes": true, 00:27:46.956 "zcopy": true, 00:27:46.956 "get_zone_info": false, 00:27:46.956 "zone_management": false, 00:27:46.956 "zone_append": false, 00:27:46.956 "compare": false, 00:27:46.956 "compare_and_write": false, 00:27:46.956 "abort": true, 00:27:46.956 "seek_hole": false, 
00:27:46.956 "seek_data": false, 00:27:46.956 "copy": true, 00:27:46.956 "nvme_iov_md": false 00:27:46.956 }, 00:27:46.956 "memory_domains": [ 00:27:46.956 { 00:27:46.956 "dma_device_id": "system", 00:27:46.956 "dma_device_type": 1 00:27:46.956 }, 00:27:46.956 { 00:27:46.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.956 "dma_device_type": 2 00:27:46.956 } 00:27:46.956 ], 00:27:46.956 "driver_specific": {} 00:27:46.956 } 00:27:46.956 ]' 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.957 [2024-10-09 13:56:53.380934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:27:46.957 [2024-10-09 13:56:53.381043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.957 [2024-10-09 13:56:53.381084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:46.957 [2024-10-09 13:56:53.381098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.957 [2024-10-09 13:56:53.384345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.957 [2024-10-09 13:56:53.384409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:46.957 Passthru0 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:46.957 { 00:27:46.957 "name": "Malloc0", 00:27:46.957 "aliases": [ 00:27:46.957 "3838f88f-336a-4734-9b65-55f700643bc2" 00:27:46.957 ], 00:27:46.957 "product_name": "Malloc disk", 00:27:46.957 "block_size": 512, 00:27:46.957 "num_blocks": 16384, 00:27:46.957 "uuid": "3838f88f-336a-4734-9b65-55f700643bc2", 00:27:46.957 "assigned_rate_limits": { 00:27:46.957 "rw_ios_per_sec": 0, 00:27:46.957 "rw_mbytes_per_sec": 0, 00:27:46.957 "r_mbytes_per_sec": 0, 00:27:46.957 "w_mbytes_per_sec": 0 00:27:46.957 }, 00:27:46.957 "claimed": true, 00:27:46.957 "claim_type": "exclusive_write", 00:27:46.957 "zoned": false, 00:27:46.957 "supported_io_types": { 00:27:46.957 "read": true, 00:27:46.957 "write": true, 00:27:46.957 "unmap": true, 00:27:46.957 "flush": true, 00:27:46.957 "reset": true, 00:27:46.957 "nvme_admin": false, 00:27:46.957 "nvme_io": false, 00:27:46.957 "nvme_io_md": false, 00:27:46.957 "write_zeroes": true, 00:27:46.957 "zcopy": true, 00:27:46.957 "get_zone_info": false, 00:27:46.957 "zone_management": false, 00:27:46.957 "zone_append": false, 00:27:46.957 "compare": false, 00:27:46.957 "compare_and_write": false, 00:27:46.957 "abort": true, 00:27:46.957 "seek_hole": false, 00:27:46.957 "seek_data": false, 00:27:46.957 "copy": true, 00:27:46.957 "nvme_iov_md": false 00:27:46.957 }, 00:27:46.957 "memory_domains": [ 00:27:46.957 { 00:27:46.957 "dma_device_id": "system", 00:27:46.957 "dma_device_type": 1 00:27:46.957 }, 00:27:46.957 { 00:27:46.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.957 "dma_device_type": 2 00:27:46.957 } 00:27:46.957 ], 00:27:46.957 "driver_specific": {} 00:27:46.957 }, 00:27:46.957 { 00:27:46.957 "name": "Passthru0", 00:27:46.957 "aliases": [ 00:27:46.957 "68422593-4a2a-519d-b752-8ebc11c06df9" 00:27:46.957 ], 00:27:46.957 "product_name": "passthru", 00:27:46.957 
"block_size": 512, 00:27:46.957 "num_blocks": 16384, 00:27:46.957 "uuid": "68422593-4a2a-519d-b752-8ebc11c06df9", 00:27:46.957 "assigned_rate_limits": { 00:27:46.957 "rw_ios_per_sec": 0, 00:27:46.957 "rw_mbytes_per_sec": 0, 00:27:46.957 "r_mbytes_per_sec": 0, 00:27:46.957 "w_mbytes_per_sec": 0 00:27:46.957 }, 00:27:46.957 "claimed": false, 00:27:46.957 "zoned": false, 00:27:46.957 "supported_io_types": { 00:27:46.957 "read": true, 00:27:46.957 "write": true, 00:27:46.957 "unmap": true, 00:27:46.957 "flush": true, 00:27:46.957 "reset": true, 00:27:46.957 "nvme_admin": false, 00:27:46.957 "nvme_io": false, 00:27:46.957 "nvme_io_md": false, 00:27:46.957 "write_zeroes": true, 00:27:46.957 "zcopy": true, 00:27:46.957 "get_zone_info": false, 00:27:46.957 "zone_management": false, 00:27:46.957 "zone_append": false, 00:27:46.957 "compare": false, 00:27:46.957 "compare_and_write": false, 00:27:46.957 "abort": true, 00:27:46.957 "seek_hole": false, 00:27:46.957 "seek_data": false, 00:27:46.957 "copy": true, 00:27:46.957 "nvme_iov_md": false 00:27:46.957 }, 00:27:46.957 "memory_domains": [ 00:27:46.957 { 00:27:46.957 "dma_device_id": "system", 00:27:46.957 "dma_device_type": 1 00:27:46.957 }, 00:27:46.957 { 00:27:46.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:46.957 "dma_device_type": 2 00:27:46.957 } 00:27:46.957 ], 00:27:46.957 "driver_specific": { 00:27:46.957 "passthru": { 00:27:46.957 "name": "Passthru0", 00:27:46.957 "base_bdev_name": "Malloc0" 00:27:46.957 } 00:27:46.957 } 00:27:46.957 } 00:27:46.957 ]' 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.957 13:56:53 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:46.957 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.957 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:47.215 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:47.215 ************************************ 00:27:47.215 END TEST rpc_integrity 00:27:47.215 ************************************ 00:27:47.215 13:56:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:47.215 00:27:47.215 real 0m0.338s 00:27:47.215 user 0m0.204s 00:27:47.215 sys 0m0.060s 00:27:47.215 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.215 13:56:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.215 13:56:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:27:47.215 13:56:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:47.215 13:56:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.215 13:56:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.215 ************************************ 00:27:47.215 START TEST rpc_plugins 00:27:47.215 ************************************ 00:27:47.215 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:27:47.215 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:27:47.215 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.215 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:47.215 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.215 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:27:47.215 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:27:47.215 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:27:47.216 { 00:27:47.216 "name": "Malloc1", 00:27:47.216 "aliases": [ 00:27:47.216 "a8ea94ea-151e-41ec-8b75-6fc0341fadc6" 00:27:47.216 ], 00:27:47.216 "product_name": "Malloc disk", 00:27:47.216 "block_size": 4096, 00:27:47.216 "num_blocks": 256, 00:27:47.216 "uuid": "a8ea94ea-151e-41ec-8b75-6fc0341fadc6", 00:27:47.216 "assigned_rate_limits": { 00:27:47.216 "rw_ios_per_sec": 0, 00:27:47.216 "rw_mbytes_per_sec": 0, 00:27:47.216 "r_mbytes_per_sec": 0, 00:27:47.216 "w_mbytes_per_sec": 0 00:27:47.216 }, 00:27:47.216 "claimed": false, 00:27:47.216 "zoned": false, 00:27:47.216 "supported_io_types": { 00:27:47.216 "read": true, 00:27:47.216 "write": true, 00:27:47.216 "unmap": true, 00:27:47.216 "flush": true, 00:27:47.216 "reset": true, 00:27:47.216 "nvme_admin": false, 00:27:47.216 "nvme_io": false, 00:27:47.216 "nvme_io_md": false, 00:27:47.216 "write_zeroes": true, 00:27:47.216 "zcopy": true, 00:27:47.216 "get_zone_info": false, 00:27:47.216 "zone_management": false, 00:27:47.216 "zone_append": false, 00:27:47.216 "compare": false, 00:27:47.216 "compare_and_write": false, 00:27:47.216 "abort": true, 00:27:47.216 "seek_hole": false, 00:27:47.216 "seek_data": false, 00:27:47.216 "copy": 
true, 00:27:47.216 "nvme_iov_md": false 00:27:47.216 }, 00:27:47.216 "memory_domains": [ 00:27:47.216 { 00:27:47.216 "dma_device_id": "system", 00:27:47.216 "dma_device_type": 1 00:27:47.216 }, 00:27:47.216 { 00:27:47.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.216 "dma_device_type": 2 00:27:47.216 } 00:27:47.216 ], 00:27:47.216 "driver_specific": {} 00:27:47.216 } 00:27:47.216 ]' 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:47.216 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:27:47.216 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:27:47.475 ************************************ 00:27:47.475 END TEST rpc_plugins 00:27:47.475 ************************************ 00:27:47.475 13:56:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:27:47.475 00:27:47.475 real 0m0.156s 00:27:47.475 user 0m0.094s 00:27:47.475 sys 0m0.024s 00:27:47.475 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.475 13:56:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:27:47.475 13:56:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:27:47.475 13:56:53 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:47.475 13:56:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.475 13:56:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.475 ************************************ 00:27:47.475 START TEST rpc_trace_cmd_test 00:27:47.475 ************************************ 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:27:47.475 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69311", 00:27:47.475 "tpoint_group_mask": "0x8", 00:27:47.475 "iscsi_conn": { 00:27:47.475 "mask": "0x2", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "scsi": { 00:27:47.475 "mask": "0x4", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "bdev": { 00:27:47.475 "mask": "0x8", 00:27:47.475 "tpoint_mask": "0xffffffffffffffff" 00:27:47.475 }, 00:27:47.475 "nvmf_rdma": { 00:27:47.475 "mask": "0x10", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "nvmf_tcp": { 00:27:47.475 "mask": "0x20", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "ftl": { 00:27:47.475 "mask": "0x40", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "blobfs": { 00:27:47.475 "mask": "0x80", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "dsa": { 00:27:47.475 "mask": "0x200", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "thread": { 00:27:47.475 "mask": "0x400", 00:27:47.475 
"tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "nvme_pcie": { 00:27:47.475 "mask": "0x800", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "iaa": { 00:27:47.475 "mask": "0x1000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "nvme_tcp": { 00:27:47.475 "mask": "0x2000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "bdev_nvme": { 00:27:47.475 "mask": "0x4000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "sock": { 00:27:47.475 "mask": "0x8000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "blob": { 00:27:47.475 "mask": "0x10000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 }, 00:27:47.475 "bdev_raid": { 00:27:47.475 "mask": "0x20000", 00:27:47.475 "tpoint_mask": "0x0" 00:27:47.475 } 00:27:47.475 }' 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:27:47.475 13:56:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:27:47.475 13:56:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:27:47.475 13:56:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:27:47.735 13:56:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:27:47.735 13:56:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:27:47.735 ************************************ 00:27:47.735 END TEST rpc_trace_cmd_test 00:27:47.735 ************************************ 00:27:47.735 13:56:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:27:47.735 00:27:47.735 real 0m0.273s 00:27:47.735 user 0m0.220s 00:27:47.735 sys 0m0.041s 00:27:47.735 13:56:54 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.735 13:56:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.735 13:56:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:27:47.735 13:56:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:27:47.735 13:56:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:27:47.735 13:56:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:47.735 13:56:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:47.735 13:56:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:47.735 ************************************ 00:27:47.735 START TEST rpc_daemon_integrity 00:27:47.735 ************************************ 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:27:47.735 { 00:27:47.735 "name": "Malloc2", 00:27:47.735 "aliases": [ 00:27:47.735 "fe13a626-d645-4648-8f7f-7db0fe2f2062" 00:27:47.735 ], 00:27:47.735 "product_name": "Malloc disk", 00:27:47.735 "block_size": 512, 00:27:47.735 "num_blocks": 16384, 00:27:47.735 "uuid": "fe13a626-d645-4648-8f7f-7db0fe2f2062", 00:27:47.735 "assigned_rate_limits": { 00:27:47.735 "rw_ios_per_sec": 0, 00:27:47.735 "rw_mbytes_per_sec": 0, 00:27:47.735 "r_mbytes_per_sec": 0, 00:27:47.735 "w_mbytes_per_sec": 0 00:27:47.735 }, 00:27:47.735 "claimed": false, 00:27:47.735 "zoned": false, 00:27:47.735 "supported_io_types": { 00:27:47.735 "read": true, 00:27:47.735 "write": true, 00:27:47.735 "unmap": true, 00:27:47.735 "flush": true, 00:27:47.735 "reset": true, 00:27:47.735 "nvme_admin": false, 00:27:47.735 "nvme_io": false, 00:27:47.735 "nvme_io_md": false, 00:27:47.735 "write_zeroes": true, 00:27:47.735 "zcopy": true, 00:27:47.735 "get_zone_info": false, 00:27:47.735 "zone_management": false, 00:27:47.735 "zone_append": false, 00:27:47.735 "compare": false, 00:27:47.735 "compare_and_write": false, 00:27:47.735 "abort": true, 00:27:47.735 "seek_hole": false, 00:27:47.735 "seek_data": false, 00:27:47.735 "copy": true, 00:27:47.735 "nvme_iov_md": false 00:27:47.735 }, 00:27:47.735 "memory_domains": [ 00:27:47.735 { 00:27:47.735 "dma_device_id": "system", 00:27:47.735 "dma_device_type": 1 00:27:47.735 }, 00:27:47.735 { 00:27:47.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.735 "dma_device_type": 2 00:27:47.735 } 00:27:47.735 ], 00:27:47.735 "driver_specific": {} 00:27:47.735 } 00:27:47.735 ]' 
00:27:47.735 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.995 [2024-10-09 13:56:54.307612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:27:47.995 [2024-10-09 13:56:54.307702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.995 [2024-10-09 13:56:54.307763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:47.995 [2024-10-09 13:56:54.307788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.995 [2024-10-09 13:56:54.311185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.995 Passthru0 00:27:47.995 [2024-10-09 13:56:54.311365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:27:47.995 { 00:27:47.995 "name": "Malloc2", 00:27:47.995 "aliases": [ 00:27:47.995 "fe13a626-d645-4648-8f7f-7db0fe2f2062" 00:27:47.995 ], 00:27:47.995 "product_name": "Malloc disk", 00:27:47.995 "block_size": 
512, 00:27:47.995 "num_blocks": 16384, 00:27:47.995 "uuid": "fe13a626-d645-4648-8f7f-7db0fe2f2062", 00:27:47.995 "assigned_rate_limits": { 00:27:47.995 "rw_ios_per_sec": 0, 00:27:47.995 "rw_mbytes_per_sec": 0, 00:27:47.995 "r_mbytes_per_sec": 0, 00:27:47.995 "w_mbytes_per_sec": 0 00:27:47.995 }, 00:27:47.995 "claimed": true, 00:27:47.995 "claim_type": "exclusive_write", 00:27:47.995 "zoned": false, 00:27:47.995 "supported_io_types": { 00:27:47.995 "read": true, 00:27:47.995 "write": true, 00:27:47.995 "unmap": true, 00:27:47.995 "flush": true, 00:27:47.995 "reset": true, 00:27:47.995 "nvme_admin": false, 00:27:47.995 "nvme_io": false, 00:27:47.995 "nvme_io_md": false, 00:27:47.995 "write_zeroes": true, 00:27:47.995 "zcopy": true, 00:27:47.995 "get_zone_info": false, 00:27:47.995 "zone_management": false, 00:27:47.995 "zone_append": false, 00:27:47.995 "compare": false, 00:27:47.995 "compare_and_write": false, 00:27:47.995 "abort": true, 00:27:47.995 "seek_hole": false, 00:27:47.995 "seek_data": false, 00:27:47.995 "copy": true, 00:27:47.995 "nvme_iov_md": false 00:27:47.995 }, 00:27:47.995 "memory_domains": [ 00:27:47.995 { 00:27:47.995 "dma_device_id": "system", 00:27:47.995 "dma_device_type": 1 00:27:47.995 }, 00:27:47.995 { 00:27:47.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.995 "dma_device_type": 2 00:27:47.995 } 00:27:47.995 ], 00:27:47.995 "driver_specific": {} 00:27:47.995 }, 00:27:47.995 { 00:27:47.995 "name": "Passthru0", 00:27:47.995 "aliases": [ 00:27:47.995 "905a96e2-cddf-544a-947f-154d57c5b345" 00:27:47.995 ], 00:27:47.995 "product_name": "passthru", 00:27:47.995 "block_size": 512, 00:27:47.995 "num_blocks": 16384, 00:27:47.995 "uuid": "905a96e2-cddf-544a-947f-154d57c5b345", 00:27:47.995 "assigned_rate_limits": { 00:27:47.995 "rw_ios_per_sec": 0, 00:27:47.995 "rw_mbytes_per_sec": 0, 00:27:47.995 "r_mbytes_per_sec": 0, 00:27:47.995 "w_mbytes_per_sec": 0 00:27:47.995 }, 00:27:47.995 "claimed": false, 00:27:47.995 "zoned": false, 00:27:47.995 
"supported_io_types": { 00:27:47.995 "read": true, 00:27:47.995 "write": true, 00:27:47.995 "unmap": true, 00:27:47.995 "flush": true, 00:27:47.995 "reset": true, 00:27:47.995 "nvme_admin": false, 00:27:47.995 "nvme_io": false, 00:27:47.995 "nvme_io_md": false, 00:27:47.995 "write_zeroes": true, 00:27:47.995 "zcopy": true, 00:27:47.995 "get_zone_info": false, 00:27:47.995 "zone_management": false, 00:27:47.995 "zone_append": false, 00:27:47.995 "compare": false, 00:27:47.995 "compare_and_write": false, 00:27:47.995 "abort": true, 00:27:47.995 "seek_hole": false, 00:27:47.995 "seek_data": false, 00:27:47.995 "copy": true, 00:27:47.995 "nvme_iov_md": false 00:27:47.995 }, 00:27:47.995 "memory_domains": [ 00:27:47.995 { 00:27:47.995 "dma_device_id": "system", 00:27:47.995 "dma_device_type": 1 00:27:47.995 }, 00:27:47.995 { 00:27:47.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.995 "dma_device_type": 2 00:27:47.995 } 00:27:47.995 ], 00:27:47.995 "driver_specific": { 00:27:47.995 "passthru": { 00:27:47.995 "name": "Passthru0", 00:27:47.995 "base_bdev_name": "Malloc2" 00:27:47.995 } 00:27:47.995 } 00:27:47.995 } 00:27:47.995 ]' 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.995 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:27:47.996 ************************************ 00:27:47.996 END TEST rpc_daemon_integrity 00:27:47.996 ************************************ 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:27:47.996 00:27:47.996 real 0m0.312s 00:27:47.996 user 0m0.189s 00:27:47.996 sys 0m0.056s 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.996 13:56:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:27:47.996 13:56:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:47.996 13:56:54 rpc -- rpc/rpc.sh@84 -- # killprocess 69311 00:27:47.996 13:56:54 rpc -- common/autotest_common.sh@950 -- # '[' -z 69311 ']' 00:27:47.996 13:56:54 rpc -- common/autotest_common.sh@954 -- # kill -0 69311 00:27:47.996 13:56:54 rpc -- common/autotest_common.sh@955 -- # uname 00:27:47.996 13:56:54 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.996 13:56:54 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69311 00:27:48.255 killing process with pid 69311 00:27:48.255 13:56:54 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:48.255 13:56:54 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:48.255 13:56:54 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69311' 00:27:48.255 13:56:54 rpc -- common/autotest_common.sh@969 -- # kill 69311 00:27:48.255 13:56:54 rpc -- common/autotest_common.sh@974 -- # wait 69311 00:27:48.514 00:27:48.514 real 0m3.117s 00:27:48.514 user 0m3.807s 00:27:48.514 sys 0m0.929s 00:27:48.514 13:56:55 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.514 13:56:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:27:48.514 ************************************ 00:27:48.514 END TEST rpc 00:27:48.514 ************************************ 00:27:48.514 13:56:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:48.514 13:56:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:48.514 13:56:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.514 13:56:55 -- common/autotest_common.sh@10 -- # set +x 00:27:48.772 ************************************ 00:27:48.772 START TEST skip_rpc 00:27:48.772 ************************************ 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:27:48.772 * Looking for test storage... 
00:27:48.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.772 13:56:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.772 --rc genhtml_branch_coverage=1 00:27:48.772 --rc genhtml_function_coverage=1 00:27:48.772 --rc genhtml_legend=1 00:27:48.772 --rc geninfo_all_blocks=1 00:27:48.772 --rc geninfo_unexecuted_blocks=1 00:27:48.772 00:27:48.772 ' 00:27:48.772 13:56:55 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:48.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.772 --rc genhtml_branch_coverage=1 00:27:48.772 --rc genhtml_function_coverage=1 00:27:48.772 --rc genhtml_legend=1 00:27:48.772 --rc geninfo_all_blocks=1 00:27:48.772 --rc geninfo_unexecuted_blocks=1 00:27:48.772 00:27:48.772 ' 00:27:48.773 13:56:55 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:27:48.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.773 --rc genhtml_branch_coverage=1 00:27:48.773 --rc genhtml_function_coverage=1 00:27:48.773 --rc genhtml_legend=1 00:27:48.773 --rc geninfo_all_blocks=1 00:27:48.773 --rc geninfo_unexecuted_blocks=1 00:27:48.773 00:27:48.773 ' 00:27:48.773 13:56:55 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:48.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.773 --rc genhtml_branch_coverage=1 00:27:48.773 --rc genhtml_function_coverage=1 00:27:48.773 --rc genhtml_legend=1 00:27:48.773 --rc geninfo_all_blocks=1 00:27:48.773 --rc geninfo_unexecuted_blocks=1 00:27:48.773 00:27:48.773 ' 00:27:48.773 13:56:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:48.773 13:56:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:27:48.773 13:56:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:27:48.773 13:56:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:48.773 13:56:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.773 13:56:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:48.773 ************************************ 00:27:48.773 START TEST skip_rpc 00:27:48.773 ************************************ 00:27:48.773 13:56:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:27:48.773 13:56:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69522 00:27:48.773 13:56:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:48.773 13:56:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:27:48.773 13:56:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:27:49.031 [2024-10-09 13:56:55.438394] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:49.031 [2024-10-09 13:56:55.438860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69522 ] 00:27:49.289 [2024-10-09 13:56:55.624529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.289 [2024-10-09 13:56:55.688022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69522 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69522 ']' 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69522 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69522 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:54.553 killing process with pid 69522 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69522' 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69522 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69522 00:27:54.553 00:27:54.553 real 0m5.497s 00:27:54.553 user 0m4.982s 00:27:54.553 sys 0m0.429s 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:54.553 13:57:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:54.553 ************************************ 00:27:54.553 END TEST skip_rpc 00:27:54.553 ************************************ 00:27:54.553 13:57:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:27:54.553 13:57:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:54.553 13:57:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:54.553 13:57:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:54.553 
************************************ 00:27:54.553 START TEST skip_rpc_with_json 00:27:54.553 ************************************ 00:27:54.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69604 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69604 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69604 ']' 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:54.553 13:57:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:54.553 [2024-10-09 13:57:00.991910] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:54.553 [2024-10-09 13:57:00.993125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69604 ] 00:27:54.813 [2024-10-09 13:57:01.181479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.813 [2024-10-09 13:57:01.241916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:55.746 [2024-10-09 13:57:01.986951] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:27:55.746 request: 00:27:55.746 { 00:27:55.746 "trtype": "tcp", 00:27:55.746 "method": "nvmf_get_transports", 00:27:55.746 "req_id": 1 00:27:55.746 } 00:27:55.746 Got JSON-RPC error response 00:27:55.746 response: 00:27:55.746 { 00:27:55.746 "code": -19, 00:27:55.746 "message": "No such device" 00:27:55.746 } 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.746 13:57:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:55.746 [2024-10-09 13:57:01.999086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.746 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:55.746 { 00:27:55.746 "subsystems": [ 00:27:55.746 { 00:27:55.746 "subsystem": "fsdev", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "fsdev_set_opts", 00:27:55.746 "params": { 00:27:55.746 "fsdev_io_pool_size": 65535, 00:27:55.746 "fsdev_io_cache_size": 256 00:27:55.746 } 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "keyring", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "iobuf", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "iobuf_set_options", 00:27:55.746 "params": { 00:27:55.746 "small_pool_count": 8192, 00:27:55.746 "large_pool_count": 1024, 00:27:55.746 "small_bufsize": 8192, 00:27:55.746 "large_bufsize": 135168 00:27:55.746 } 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "sock", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "sock_set_default_impl", 00:27:55.746 "params": { 00:27:55.746 "impl_name": "posix" 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "sock_impl_set_options", 00:27:55.746 "params": { 00:27:55.746 "impl_name": "ssl", 00:27:55.746 "recv_buf_size": 4096, 00:27:55.746 "send_buf_size": 4096, 00:27:55.746 "enable_recv_pipe": true, 00:27:55.746 "enable_quickack": false, 00:27:55.746 "enable_placement_id": 0, 00:27:55.746 
"enable_zerocopy_send_server": true, 00:27:55.746 "enable_zerocopy_send_client": false, 00:27:55.746 "zerocopy_threshold": 0, 00:27:55.746 "tls_version": 0, 00:27:55.746 "enable_ktls": false 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "sock_impl_set_options", 00:27:55.746 "params": { 00:27:55.746 "impl_name": "posix", 00:27:55.746 "recv_buf_size": 2097152, 00:27:55.746 "send_buf_size": 2097152, 00:27:55.746 "enable_recv_pipe": true, 00:27:55.746 "enable_quickack": false, 00:27:55.746 "enable_placement_id": 0, 00:27:55.746 "enable_zerocopy_send_server": true, 00:27:55.746 "enable_zerocopy_send_client": false, 00:27:55.746 "zerocopy_threshold": 0, 00:27:55.746 "tls_version": 0, 00:27:55.746 "enable_ktls": false 00:27:55.746 } 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "vmd", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "accel", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "accel_set_options", 00:27:55.746 "params": { 00:27:55.746 "small_cache_size": 128, 00:27:55.746 "large_cache_size": 16, 00:27:55.746 "task_count": 2048, 00:27:55.746 "sequence_count": 2048, 00:27:55.746 "buf_count": 2048 00:27:55.746 } 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "bdev", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "bdev_set_options", 00:27:55.746 "params": { 00:27:55.746 "bdev_io_pool_size": 65535, 00:27:55.746 "bdev_io_cache_size": 256, 00:27:55.746 "bdev_auto_examine": true, 00:27:55.746 "iobuf_small_cache_size": 128, 00:27:55.746 "iobuf_large_cache_size": 16 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "bdev_raid_set_options", 00:27:55.746 "params": { 00:27:55.746 "process_window_size_kb": 1024, 00:27:55.746 "process_max_bandwidth_mb_sec": 0 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "bdev_iscsi_set_options", 00:27:55.746 "params": { 00:27:55.746 
"timeout_sec": 30 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "bdev_nvme_set_options", 00:27:55.746 "params": { 00:27:55.746 "action_on_timeout": "none", 00:27:55.746 "timeout_us": 0, 00:27:55.746 "timeout_admin_us": 0, 00:27:55.746 "keep_alive_timeout_ms": 10000, 00:27:55.746 "arbitration_burst": 0, 00:27:55.746 "low_priority_weight": 0, 00:27:55.746 "medium_priority_weight": 0, 00:27:55.746 "high_priority_weight": 0, 00:27:55.746 "nvme_adminq_poll_period_us": 10000, 00:27:55.746 "nvme_ioq_poll_period_us": 0, 00:27:55.746 "io_queue_requests": 0, 00:27:55.746 "delay_cmd_submit": true, 00:27:55.746 "transport_retry_count": 4, 00:27:55.746 "bdev_retry_count": 3, 00:27:55.746 "transport_ack_timeout": 0, 00:27:55.746 "ctrlr_loss_timeout_sec": 0, 00:27:55.746 "reconnect_delay_sec": 0, 00:27:55.746 "fast_io_fail_timeout_sec": 0, 00:27:55.746 "disable_auto_failback": false, 00:27:55.746 "generate_uuids": false, 00:27:55.746 "transport_tos": 0, 00:27:55.746 "nvme_error_stat": false, 00:27:55.746 "rdma_srq_size": 0, 00:27:55.746 "io_path_stat": false, 00:27:55.746 "allow_accel_sequence": false, 00:27:55.746 "rdma_max_cq_size": 0, 00:27:55.746 "rdma_cm_event_timeout_ms": 0, 00:27:55.746 "dhchap_digests": [ 00:27:55.746 "sha256", 00:27:55.746 "sha384", 00:27:55.746 "sha512" 00:27:55.746 ], 00:27:55.746 "dhchap_dhgroups": [ 00:27:55.746 "null", 00:27:55.746 "ffdhe2048", 00:27:55.746 "ffdhe3072", 00:27:55.746 "ffdhe4096", 00:27:55.746 "ffdhe6144", 00:27:55.746 "ffdhe8192" 00:27:55.746 ] 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "bdev_nvme_set_hotplug", 00:27:55.746 "params": { 00:27:55.746 "period_us": 100000, 00:27:55.746 "enable": false 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "bdev_wait_for_examine" 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "scsi", 00:27:55.746 "config": null 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "scheduler", 
00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "framework_set_scheduler", 00:27:55.746 "params": { 00:27:55.746 "name": "static" 00:27:55.746 } 00:27:55.746 } 00:27:55.746 ] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "vhost_scsi", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "vhost_blk", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "ublk", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "nbd", 00:27:55.746 "config": [] 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "subsystem": "nvmf", 00:27:55.746 "config": [ 00:27:55.746 { 00:27:55.746 "method": "nvmf_set_config", 00:27:55.746 "params": { 00:27:55.746 "discovery_filter": "match_any", 00:27:55.746 "admin_cmd_passthru": { 00:27:55.746 "identify_ctrlr": false 00:27:55.746 }, 00:27:55.746 "dhchap_digests": [ 00:27:55.746 "sha256", 00:27:55.746 "sha384", 00:27:55.746 "sha512" 00:27:55.746 ], 00:27:55.746 "dhchap_dhgroups": [ 00:27:55.746 "null", 00:27:55.746 "ffdhe2048", 00:27:55.746 "ffdhe3072", 00:27:55.746 "ffdhe4096", 00:27:55.746 "ffdhe6144", 00:27:55.746 "ffdhe8192" 00:27:55.746 ] 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "nvmf_set_max_subsystems", 00:27:55.746 "params": { 00:27:55.746 "max_subsystems": 1024 00:27:55.746 } 00:27:55.746 }, 00:27:55.746 { 00:27:55.746 "method": "nvmf_set_crdt", 00:27:55.746 "params": { 00:27:55.746 "crdt1": 0, 00:27:55.746 "crdt2": 0, 00:27:55.747 "crdt3": 0 00:27:55.747 } 00:27:55.747 }, 00:27:55.747 { 00:27:55.747 "method": "nvmf_create_transport", 00:27:55.747 "params": { 00:27:55.747 "trtype": "TCP", 00:27:55.747 "max_queue_depth": 128, 00:27:55.747 "max_io_qpairs_per_ctrlr": 127, 00:27:55.747 "in_capsule_data_size": 4096, 00:27:55.747 "max_io_size": 131072, 00:27:55.747 "io_unit_size": 131072, 00:27:55.747 "max_aq_depth": 128, 00:27:55.747 "num_shared_buffers": 511, 00:27:55.747 "buf_cache_size": 4294967295, 
00:27:55.747 "dif_insert_or_strip": false, 00:27:55.747 "zcopy": false, 00:27:55.747 "c2h_success": true, 00:27:55.747 "sock_priority": 0, 00:27:55.747 "abort_timeout_sec": 1, 00:27:55.747 "ack_timeout": 0, 00:27:55.747 "data_wr_pool_size": 0 00:27:55.747 } 00:27:55.747 } 00:27:55.747 ] 00:27:55.747 }, 00:27:55.747 { 00:27:55.747 "subsystem": "iscsi", 00:27:55.747 "config": [ 00:27:55.747 { 00:27:55.747 "method": "iscsi_set_options", 00:27:55.747 "params": { 00:27:55.747 "node_base": "iqn.2016-06.io.spdk", 00:27:55.747 "max_sessions": 128, 00:27:55.747 "max_connections_per_session": 2, 00:27:55.747 "max_queue_depth": 64, 00:27:55.747 "default_time2wait": 2, 00:27:55.747 "default_time2retain": 20, 00:27:55.747 "first_burst_length": 8192, 00:27:55.747 "immediate_data": true, 00:27:55.747 "allow_duplicated_isid": false, 00:27:55.747 "error_recovery_level": 0, 00:27:55.747 "nop_timeout": 60, 00:27:55.747 "nop_in_interval": 30, 00:27:55.747 "disable_chap": false, 00:27:55.747 "require_chap": false, 00:27:55.747 "mutual_chap": false, 00:27:55.747 "chap_group": 0, 00:27:55.747 "max_large_datain_per_connection": 64, 00:27:55.747 "max_r2t_per_connection": 4, 00:27:55.747 "pdu_pool_size": 36864, 00:27:55.747 "immediate_data_pool_size": 16384, 00:27:55.747 "data_out_pool_size": 2048 00:27:55.747 } 00:27:55.747 } 00:27:55.747 ] 00:27:55.747 } 00:27:55.747 ] 00:27:55.747 } 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69604 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69604 ']' 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69604 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69604 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:55.747 killing process with pid 69604 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69604' 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69604 00:27:55.747 13:57:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69604 00:27:56.312 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69638 00:27:56.312 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:27:56.312 13:57:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69638 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69638 ']' 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69638 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69638 00:28:01.578 killing process with pid 69638 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69638' 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69638 00:28:01.578 13:57:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69638 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:28:01.837 00:28:01.837 real 0m7.286s 00:28:01.837 user 0m6.858s 00:28:01.837 sys 0m0.938s 00:28:01.837 ************************************ 00:28:01.837 END TEST skip_rpc_with_json 00:28:01.837 ************************************ 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:01.837 13:57:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:28:01.837 13:57:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:01.837 13:57:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.837 13:57:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:01.837 ************************************ 00:28:01.837 START TEST skip_rpc_with_delay 00:28:01.837 ************************************ 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:28:01.837 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:01.837 [2024-10-09 13:57:08.329062] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:28:01.837 [2024-10-09 13:57:08.329274] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:28:02.095 ************************************ 00:28:02.095 END TEST skip_rpc_with_delay 00:28:02.095 ************************************ 00:28:02.095 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:28:02.095 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:02.095 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:02.096 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:02.096 00:28:02.096 real 0m0.206s 00:28:02.096 user 0m0.092s 00:28:02.096 sys 0m0.112s 00:28:02.096 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.096 13:57:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:28:02.096 13:57:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:28:02.096 13:57:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:28:02.096 13:57:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:28:02.096 13:57:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:02.096 13:57:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.096 13:57:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.096 ************************************ 00:28:02.096 START TEST exit_on_failed_rpc_init 00:28:02.096 ************************************ 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69750 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69750 00:28:02.096 13:57:08 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69750 ']' 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.096 13:57:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:02.096 [2024-10-09 13:57:08.565105] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:02.096 [2024-10-09 13:57:08.565242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:28:02.354 [2024-10-09 13:57:08.721645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.354 [2024-10-09 13:57:08.769732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:02.920 13:57:09 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:28:02.920 13:57:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:28:03.178 [2024-10-09 13:57:09.596331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:03.178 [2024-10-09 13:57:09.596519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69768 ] 00:28:03.436 [2024-10-09 13:57:09.785651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.436 [2024-10-09 13:57:09.843282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.436 [2024-10-09 13:57:09.843421] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:03.436 [2024-10-09 13:57:09.843451] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:03.436 [2024-10-09 13:57:09.843473] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69750 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69750 ']' 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69750 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69750 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.695 killing process with pid 69750 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69750' 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69750 00:28:03.695 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69750 00:28:03.954 00:28:03.954 real 0m1.978s 00:28:03.954 user 0m2.206s 00:28:03.954 sys 0m0.616s 00:28:03.954 ************************************ 00:28:03.954 END TEST exit_on_failed_rpc_init 00:28:03.954 ************************************ 00:28:03.954 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:03.954 13:57:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:03.954 13:57:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:28:03.954 00:28:03.954 real 0m15.429s 00:28:03.954 user 0m14.350s 00:28:03.954 sys 0m2.345s 00:28:03.954 13:57:10 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.213 13:57:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 ************************************ 00:28:04.213 END TEST skip_rpc 00:28:04.213 ************************************ 00:28:04.213 13:57:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:28:04.213 13:57:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.213 13:57:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.213 13:57:10 -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 ************************************ 00:28:04.213 START TEST rpc_client 00:28:04.213 ************************************ 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:28:04.213 * Looking for test storage... 
00:28:04.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.213 13:57:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.213 --rc genhtml_branch_coverage=1 00:28:04.213 --rc genhtml_function_coverage=1 00:28:04.213 --rc genhtml_legend=1 00:28:04.213 --rc geninfo_all_blocks=1 00:28:04.213 --rc geninfo_unexecuted_blocks=1 00:28:04.213 00:28:04.213 ' 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.213 --rc genhtml_branch_coverage=1 00:28:04.213 --rc genhtml_function_coverage=1 00:28:04.213 --rc genhtml_legend=1 00:28:04.213 --rc geninfo_all_blocks=1 00:28:04.213 --rc geninfo_unexecuted_blocks=1 00:28:04.213 00:28:04.213 ' 00:28:04.213 13:57:10 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.213 --rc genhtml_branch_coverage=1 00:28:04.213 --rc genhtml_function_coverage=1 00:28:04.213 --rc genhtml_legend=1 00:28:04.213 --rc geninfo_all_blocks=1 00:28:04.213 --rc geninfo_unexecuted_blocks=1 00:28:04.213 00:28:04.213 ' 00:28:04.213 13:57:10 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:04.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.213 --rc genhtml_branch_coverage=1 00:28:04.213 --rc genhtml_function_coverage=1 00:28:04.213 --rc genhtml_legend=1 00:28:04.213 --rc geninfo_all_blocks=1 00:28:04.213 --rc geninfo_unexecuted_blocks=1 00:28:04.213 00:28:04.213 ' 00:28:04.213 13:57:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:28:04.472 OK 00:28:04.472 13:57:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:28:04.472 00:28:04.472 real 0m0.265s 00:28:04.472 user 0m0.149s 00:28:04.472 sys 0m0.129s 00:28:04.472 13:57:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.472 ************************************ 00:28:04.472 END TEST rpc_client 00:28:04.472 13:57:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:28:04.472 ************************************ 00:28:04.472 13:57:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:28:04.472 13:57:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.472 13:57:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.472 13:57:10 -- common/autotest_common.sh@10 -- # set +x 00:28:04.472 ************************************ 00:28:04.472 START TEST json_config 00:28:04.472 ************************************ 00:28:04.472 13:57:10 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:28:04.472 13:57:10 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:04.472 13:57:10 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:04.472 13:57:10 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:28:04.732 13:57:11 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.732 13:57:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.732 13:57:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.732 13:57:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.732 13:57:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.732 13:57:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.732 13:57:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:28:04.732 13:57:11 json_config -- scripts/common.sh@345 -- # : 1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.732 13:57:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.732 13:57:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@353 -- # local d=1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.732 13:57:11 json_config -- scripts/common.sh@355 -- # echo 1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.732 13:57:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@353 -- # local d=2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.732 13:57:11 json_config -- scripts/common.sh@355 -- # echo 2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.732 13:57:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.732 13:57:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.732 13:57:11 json_config -- scripts/common.sh@368 -- # return 0 00:28:04.732 13:57:11 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.732 13:57:11 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.732 --rc genhtml_branch_coverage=1 00:28:04.732 --rc genhtml_function_coverage=1 00:28:04.732 --rc genhtml_legend=1 00:28:04.732 --rc geninfo_all_blocks=1 00:28:04.732 --rc geninfo_unexecuted_blocks=1 00:28:04.732 00:28:04.732 ' 00:28:04.732 13:57:11 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.732 --rc genhtml_branch_coverage=1 00:28:04.732 --rc genhtml_function_coverage=1 00:28:04.732 --rc genhtml_legend=1 00:28:04.732 --rc geninfo_all_blocks=1 00:28:04.732 --rc geninfo_unexecuted_blocks=1 00:28:04.732 00:28:04.732 ' 00:28:04.732 13:57:11 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.732 --rc genhtml_branch_coverage=1 00:28:04.732 --rc genhtml_function_coverage=1 00:28:04.732 --rc genhtml_legend=1 00:28:04.732 --rc geninfo_all_blocks=1 00:28:04.732 --rc geninfo_unexecuted_blocks=1 00:28:04.732 00:28:04.732 ' 00:28:04.732 13:57:11 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.732 --rc genhtml_branch_coverage=1 00:28:04.732 --rc genhtml_function_coverage=1 00:28:04.732 --rc genhtml_legend=1 00:28:04.732 --rc geninfo_all_blocks=1 00:28:04.732 --rc geninfo_unexecuted_blocks=1 00:28:04.733 00:28:04.733 ' 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e35b0848-66bf-4384-b956-5a01a608691e 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=e35b0848-66bf-4384-b956-5a01a608691e 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:04.733 13:57:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.733 13:57:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.733 13:57:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.733 13:57:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.733 13:57:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.733 13:57:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.733 13:57:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.733 13:57:11 json_config -- paths/export.sh@5 -- # export PATH 00:28:04.733 13:57:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@51 -- # : 0 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:04.733 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.733 13:57:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:28:04.733 WARNING: No tests are enabled so not running JSON configuration tests 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:28:04.733 13:57:11 json_config -- json_config/json_config.sh@28 -- # exit 0 00:28:04.733 00:28:04.733 real 0m0.208s 00:28:04.733 user 0m0.124s 00:28:04.733 sys 0m0.089s 00:28:04.733 13:57:11 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:04.733 13:57:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:04.733 ************************************ 00:28:04.733 END TEST json_config 00:28:04.733 ************************************ 00:28:04.733 13:57:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:28:04.733 13:57:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:04.733 13:57:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:04.733 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:28:04.733 ************************************ 00:28:04.733 START TEST json_config_extra_key 00:28:04.733 ************************************ 00:28:04.733 13:57:11 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:28:04.733 13:57:11 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:04.733 13:57:11 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:28:04.733 13:57:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:04.992 13:57:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.992 13:57:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:28:04.992 13:57:11 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.992 13:57:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:04.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.992 --rc genhtml_branch_coverage=1 00:28:04.992 --rc genhtml_function_coverage=1 00:28:04.992 --rc genhtml_legend=1 00:28:04.992 --rc geninfo_all_blocks=1 00:28:04.992 --rc geninfo_unexecuted_blocks=1 00:28:04.992 00:28:04.993 ' 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:04.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.993 --rc genhtml_branch_coverage=1 00:28:04.993 --rc genhtml_function_coverage=1 00:28:04.993 --rc 
genhtml_legend=1 00:28:04.993 --rc geninfo_all_blocks=1 00:28:04.993 --rc geninfo_unexecuted_blocks=1 00:28:04.993 00:28:04.993 ' 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:04.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.993 --rc genhtml_branch_coverage=1 00:28:04.993 --rc genhtml_function_coverage=1 00:28:04.993 --rc genhtml_legend=1 00:28:04.993 --rc geninfo_all_blocks=1 00:28:04.993 --rc geninfo_unexecuted_blocks=1 00:28:04.993 00:28:04.993 ' 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:04.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.993 --rc genhtml_branch_coverage=1 00:28:04.993 --rc genhtml_function_coverage=1 00:28:04.993 --rc genhtml_legend=1 00:28:04.993 --rc geninfo_all_blocks=1 00:28:04.993 --rc geninfo_unexecuted_blocks=1 00:28:04.993 00:28:04.993 ' 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e35b0848-66bf-4384-b956-5a01a608691e 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e35b0848-66bf-4384-b956-5a01a608691e 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:04.993 13:57:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.993 13:57:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.993 13:57:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.993 13:57:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.993 13:57:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.993 13:57:11 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.993 13:57:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.993 13:57:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:28:04.993 13:57:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:04.993 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.993 13:57:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:28:04.993 INFO: launching applications... 00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:28:04.993 13:57:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69956 00:28:04.993 Waiting for target to run... 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69956 /var/tmp/spdk_tgt.sock 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69956 ']' 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:04.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:04.993 13:57:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:28:04.993 13:57:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:28:04.993 [2024-10-09 13:57:11.523529] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:04.993 [2024-10-09 13:57:11.523752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69956 ] 00:28:05.560 [2024-10-09 13:57:11.944904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.561 [2024-10-09 13:57:11.981506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.128 13:57:12 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.128 13:57:12 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:28:06.128 00:28:06.128 INFO: shutting down applications... 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:28:06.128 13:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:28:06.128 13:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69956 ]] 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69956 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69956 00:28:06.128 13:57:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69956 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:28:06.694 SPDK target shutdown done 00:28:06.694 13:57:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:28:06.694 Success 00:28:06.694 13:57:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:28:06.694 00:28:06.694 real 0m1.821s 00:28:06.694 user 0m1.635s 00:28:06.694 sys 0m0.575s 00:28:06.694 13:57:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.694 13:57:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:28:06.694 ************************************ 
00:28:06.694 END TEST json_config_extra_key 00:28:06.694 ************************************ 00:28:06.694 13:57:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:28:06.694 13:57:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:06.694 13:57:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:06.694 13:57:13 -- common/autotest_common.sh@10 -- # set +x 00:28:06.694 ************************************ 00:28:06.694 START TEST alias_rpc 00:28:06.694 ************************************ 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:28:06.694 * Looking for test storage... 00:28:06.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.694 13:57:13 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.694 13:57:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.694 --rc genhtml_branch_coverage=1 00:28:06.694 --rc genhtml_function_coverage=1 00:28:06.694 --rc genhtml_legend=1 00:28:06.694 --rc geninfo_all_blocks=1 00:28:06.694 --rc geninfo_unexecuted_blocks=1 00:28:06.694 00:28:06.694 ' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:06.694 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.694 --rc genhtml_branch_coverage=1 00:28:06.694 --rc genhtml_function_coverage=1 00:28:06.694 --rc genhtml_legend=1 00:28:06.694 --rc geninfo_all_blocks=1 00:28:06.694 --rc geninfo_unexecuted_blocks=1 00:28:06.694 00:28:06.694 ' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.694 --rc genhtml_branch_coverage=1 00:28:06.694 --rc genhtml_function_coverage=1 00:28:06.694 --rc genhtml_legend=1 00:28:06.694 --rc geninfo_all_blocks=1 00:28:06.694 --rc geninfo_unexecuted_blocks=1 00:28:06.694 00:28:06.694 ' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:06.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.694 --rc genhtml_branch_coverage=1 00:28:06.694 --rc genhtml_function_coverage=1 00:28:06.694 --rc genhtml_legend=1 00:28:06.694 --rc geninfo_all_blocks=1 00:28:06.694 --rc geninfo_unexecuted_blocks=1 00:28:06.694 00:28:06.694 ' 00:28:06.694 13:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:28:06.694 13:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70029 00:28:06.694 13:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.694 13:57:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70029 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70029 ']' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.694 13:57:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:06.952 [2024-10-09 13:57:13.365045] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:06.952 [2024-10-09 13:57:13.365239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70029 ] 00:28:07.211 [2024-10-09 13:57:13.549186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.211 [2024-10-09 13:57:13.609718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.779 13:57:14 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.779 13:57:14 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:28:07.779 13:57:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:28:08.037 13:57:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70029 00:28:08.037 13:57:14 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70029 ']' 00:28:08.037 13:57:14 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70029 00:28:08.037 13:57:14 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:28:08.037 13:57:14 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.037 13:57:14 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70029 00:28:08.296 13:57:14 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.296 13:57:14 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.296 killing process 
with pid 70029 00:28:08.296 13:57:14 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70029' 00:28:08.296 13:57:14 alias_rpc -- common/autotest_common.sh@969 -- # kill 70029 00:28:08.296 13:57:14 alias_rpc -- common/autotest_common.sh@974 -- # wait 70029 00:28:08.554 00:28:08.554 real 0m1.991s 00:28:08.554 user 0m2.113s 00:28:08.554 sys 0m0.595s 00:28:08.554 13:57:15 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:08.554 13:57:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:08.554 ************************************ 00:28:08.554 END TEST alias_rpc 00:28:08.554 ************************************ 00:28:08.554 13:57:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:28:08.554 13:57:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:28:08.554 13:57:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:08.554 13:57:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:08.554 13:57:15 -- common/autotest_common.sh@10 -- # set +x 00:28:08.554 ************************************ 00:28:08.554 START TEST spdkcli_tcp 00:28:08.554 ************************************ 00:28:08.554 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:28:08.813 * Looking for test storage... 
00:28:08.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.813 13:57:15 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.813 --rc genhtml_branch_coverage=1 00:28:08.813 --rc genhtml_function_coverage=1 00:28:08.813 --rc genhtml_legend=1 00:28:08.813 --rc geninfo_all_blocks=1 00:28:08.813 --rc geninfo_unexecuted_blocks=1 00:28:08.813 00:28:08.813 ' 00:28:08.813 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.813 --rc genhtml_branch_coverage=1 00:28:08.813 --rc genhtml_function_coverage=1 00:28:08.813 --rc genhtml_legend=1 00:28:08.813 --rc geninfo_all_blocks=1 00:28:08.813 --rc geninfo_unexecuted_blocks=1 00:28:08.813 00:28:08.813 ' 00:28:08.813 13:57:15 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:08.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.813 --rc genhtml_branch_coverage=1 00:28:08.813 --rc genhtml_function_coverage=1 00:28:08.813 --rc genhtml_legend=1 00:28:08.813 --rc geninfo_all_blocks=1 00:28:08.813 --rc geninfo_unexecuted_blocks=1 00:28:08.813 00:28:08.813 ' 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:08.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.814 --rc genhtml_branch_coverage=1 00:28:08.814 --rc genhtml_function_coverage=1 00:28:08.814 --rc genhtml_legend=1 00:28:08.814 --rc geninfo_all_blocks=1 00:28:08.814 --rc geninfo_unexecuted_blocks=1 00:28:08.814 00:28:08.814 ' 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:08.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70114 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70114 00:28:08.814 13:57:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70114 ']' 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.814 13:57:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.072 [2024-10-09 13:57:15.415189] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:09.072 [2024-10-09 13:57:15.415382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ] 00:28:09.072 [2024-10-09 13:57:15.593412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:09.331 [2024-10-09 13:57:15.640856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.332 [2024-10-09 13:57:15.640951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.924 13:57:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:09.924 13:57:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:28:09.924 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70131 00:28:09.924 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:28:09.924 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:28:10.186 [ 00:28:10.186 "bdev_malloc_delete", 00:28:10.186 "bdev_malloc_create", 00:28:10.186 "bdev_null_resize", 00:28:10.186 "bdev_null_delete", 00:28:10.186 "bdev_null_create", 00:28:10.186 "bdev_nvme_cuse_unregister", 00:28:10.186 "bdev_nvme_cuse_register", 00:28:10.186 "bdev_opal_new_user", 00:28:10.186 "bdev_opal_set_lock_state", 00:28:10.186 "bdev_opal_delete", 00:28:10.186 "bdev_opal_get_info", 00:28:10.186 "bdev_opal_create", 00:28:10.187 "bdev_nvme_opal_revert", 00:28:10.187 "bdev_nvme_opal_init", 00:28:10.187 "bdev_nvme_send_cmd", 00:28:10.187 "bdev_nvme_set_keys", 00:28:10.187 "bdev_nvme_get_path_iostat", 00:28:10.187 "bdev_nvme_get_mdns_discovery_info", 00:28:10.187 "bdev_nvme_stop_mdns_discovery", 00:28:10.187 "bdev_nvme_start_mdns_discovery", 00:28:10.187 "bdev_nvme_set_multipath_policy", 00:28:10.187 
"bdev_nvme_set_preferred_path", 00:28:10.187 "bdev_nvme_get_io_paths", 00:28:10.187 "bdev_nvme_remove_error_injection", 00:28:10.187 "bdev_nvme_add_error_injection", 00:28:10.187 "bdev_nvme_get_discovery_info", 00:28:10.187 "bdev_nvme_stop_discovery", 00:28:10.187 "bdev_nvme_start_discovery", 00:28:10.187 "bdev_nvme_get_controller_health_info", 00:28:10.187 "bdev_nvme_disable_controller", 00:28:10.187 "bdev_nvme_enable_controller", 00:28:10.187 "bdev_nvme_reset_controller", 00:28:10.187 "bdev_nvme_get_transport_statistics", 00:28:10.187 "bdev_nvme_apply_firmware", 00:28:10.187 "bdev_nvme_detach_controller", 00:28:10.187 "bdev_nvme_get_controllers", 00:28:10.187 "bdev_nvme_attach_controller", 00:28:10.187 "bdev_nvme_set_hotplug", 00:28:10.187 "bdev_nvme_set_options", 00:28:10.187 "bdev_passthru_delete", 00:28:10.187 "bdev_passthru_create", 00:28:10.187 "bdev_lvol_set_parent_bdev", 00:28:10.187 "bdev_lvol_set_parent", 00:28:10.187 "bdev_lvol_check_shallow_copy", 00:28:10.187 "bdev_lvol_start_shallow_copy", 00:28:10.187 "bdev_lvol_grow_lvstore", 00:28:10.187 "bdev_lvol_get_lvols", 00:28:10.187 "bdev_lvol_get_lvstores", 00:28:10.187 "bdev_lvol_delete", 00:28:10.187 "bdev_lvol_set_read_only", 00:28:10.187 "bdev_lvol_resize", 00:28:10.187 "bdev_lvol_decouple_parent", 00:28:10.187 "bdev_lvol_inflate", 00:28:10.187 "bdev_lvol_rename", 00:28:10.187 "bdev_lvol_clone_bdev", 00:28:10.187 "bdev_lvol_clone", 00:28:10.187 "bdev_lvol_snapshot", 00:28:10.187 "bdev_lvol_create", 00:28:10.187 "bdev_lvol_delete_lvstore", 00:28:10.187 "bdev_lvol_rename_lvstore", 00:28:10.187 "bdev_lvol_create_lvstore", 00:28:10.187 "bdev_raid_set_options", 00:28:10.187 "bdev_raid_remove_base_bdev", 00:28:10.187 "bdev_raid_add_base_bdev", 00:28:10.187 "bdev_raid_delete", 00:28:10.187 "bdev_raid_create", 00:28:10.187 "bdev_raid_get_bdevs", 00:28:10.187 "bdev_error_inject_error", 00:28:10.187 "bdev_error_delete", 00:28:10.187 "bdev_error_create", 00:28:10.187 "bdev_split_delete", 00:28:10.187 
"bdev_split_create", 00:28:10.187 "bdev_delay_delete", 00:28:10.187 "bdev_delay_create", 00:28:10.187 "bdev_delay_update_latency", 00:28:10.187 "bdev_zone_block_delete", 00:28:10.187 "bdev_zone_block_create", 00:28:10.187 "blobfs_create", 00:28:10.187 "blobfs_detect", 00:28:10.187 "blobfs_set_cache_size", 00:28:10.187 "bdev_aio_delete", 00:28:10.187 "bdev_aio_rescan", 00:28:10.187 "bdev_aio_create", 00:28:10.187 "bdev_ftl_set_property", 00:28:10.187 "bdev_ftl_get_properties", 00:28:10.187 "bdev_ftl_get_stats", 00:28:10.187 "bdev_ftl_unmap", 00:28:10.187 "bdev_ftl_unload", 00:28:10.187 "bdev_ftl_delete", 00:28:10.187 "bdev_ftl_load", 00:28:10.187 "bdev_ftl_create", 00:28:10.187 "bdev_virtio_attach_controller", 00:28:10.187 "bdev_virtio_scsi_get_devices", 00:28:10.187 "bdev_virtio_detach_controller", 00:28:10.187 "bdev_virtio_blk_set_hotplug", 00:28:10.187 "bdev_iscsi_delete", 00:28:10.187 "bdev_iscsi_create", 00:28:10.187 "bdev_iscsi_set_options", 00:28:10.187 "accel_error_inject_error", 00:28:10.187 "ioat_scan_accel_module", 00:28:10.187 "dsa_scan_accel_module", 00:28:10.187 "iaa_scan_accel_module", 00:28:10.187 "keyring_file_remove_key", 00:28:10.187 "keyring_file_add_key", 00:28:10.187 "keyring_linux_set_options", 00:28:10.187 "fsdev_aio_delete", 00:28:10.187 "fsdev_aio_create", 00:28:10.187 "iscsi_get_histogram", 00:28:10.187 "iscsi_enable_histogram", 00:28:10.187 "iscsi_set_options", 00:28:10.187 "iscsi_get_auth_groups", 00:28:10.187 "iscsi_auth_group_remove_secret", 00:28:10.187 "iscsi_auth_group_add_secret", 00:28:10.187 "iscsi_delete_auth_group", 00:28:10.187 "iscsi_create_auth_group", 00:28:10.187 "iscsi_set_discovery_auth", 00:28:10.187 "iscsi_get_options", 00:28:10.187 "iscsi_target_node_request_logout", 00:28:10.187 "iscsi_target_node_set_redirect", 00:28:10.187 "iscsi_target_node_set_auth", 00:28:10.187 "iscsi_target_node_add_lun", 00:28:10.187 "iscsi_get_stats", 00:28:10.187 "iscsi_get_connections", 00:28:10.187 "iscsi_portal_group_set_auth", 
00:28:10.187 "iscsi_start_portal_group", 00:28:10.187 "iscsi_delete_portal_group", 00:28:10.187 "iscsi_create_portal_group", 00:28:10.187 "iscsi_get_portal_groups", 00:28:10.187 "iscsi_delete_target_node", 00:28:10.187 "iscsi_target_node_remove_pg_ig_maps", 00:28:10.187 "iscsi_target_node_add_pg_ig_maps", 00:28:10.187 "iscsi_create_target_node", 00:28:10.187 "iscsi_get_target_nodes", 00:28:10.187 "iscsi_delete_initiator_group", 00:28:10.187 "iscsi_initiator_group_remove_initiators", 00:28:10.187 "iscsi_initiator_group_add_initiators", 00:28:10.187 "iscsi_create_initiator_group", 00:28:10.187 "iscsi_get_initiator_groups", 00:28:10.187 "nvmf_set_crdt", 00:28:10.187 "nvmf_set_config", 00:28:10.187 "nvmf_set_max_subsystems", 00:28:10.187 "nvmf_stop_mdns_prr", 00:28:10.187 "nvmf_publish_mdns_prr", 00:28:10.187 "nvmf_subsystem_get_listeners", 00:28:10.187 "nvmf_subsystem_get_qpairs", 00:28:10.187 "nvmf_subsystem_get_controllers", 00:28:10.187 "nvmf_get_stats", 00:28:10.187 "nvmf_get_transports", 00:28:10.187 "nvmf_create_transport", 00:28:10.187 "nvmf_get_targets", 00:28:10.187 "nvmf_delete_target", 00:28:10.187 "nvmf_create_target", 00:28:10.187 "nvmf_subsystem_allow_any_host", 00:28:10.187 "nvmf_subsystem_set_keys", 00:28:10.187 "nvmf_subsystem_remove_host", 00:28:10.187 "nvmf_subsystem_add_host", 00:28:10.187 "nvmf_ns_remove_host", 00:28:10.187 "nvmf_ns_add_host", 00:28:10.187 "nvmf_subsystem_remove_ns", 00:28:10.187 "nvmf_subsystem_set_ns_ana_group", 00:28:10.187 "nvmf_subsystem_add_ns", 00:28:10.187 "nvmf_subsystem_listener_set_ana_state", 00:28:10.187 "nvmf_discovery_get_referrals", 00:28:10.187 "nvmf_discovery_remove_referral", 00:28:10.187 "nvmf_discovery_add_referral", 00:28:10.187 "nvmf_subsystem_remove_listener", 00:28:10.187 "nvmf_subsystem_add_listener", 00:28:10.187 "nvmf_delete_subsystem", 00:28:10.187 "nvmf_create_subsystem", 00:28:10.187 "nvmf_get_subsystems", 00:28:10.187 "env_dpdk_get_mem_stats", 00:28:10.187 "nbd_get_disks", 00:28:10.187 
"nbd_stop_disk", 00:28:10.187 "nbd_start_disk", 00:28:10.187 "ublk_recover_disk", 00:28:10.187 "ublk_get_disks", 00:28:10.187 "ublk_stop_disk", 00:28:10.187 "ublk_start_disk", 00:28:10.187 "ublk_destroy_target", 00:28:10.187 "ublk_create_target", 00:28:10.187 "virtio_blk_create_transport", 00:28:10.187 "virtio_blk_get_transports", 00:28:10.187 "vhost_controller_set_coalescing", 00:28:10.187 "vhost_get_controllers", 00:28:10.187 "vhost_delete_controller", 00:28:10.187 "vhost_create_blk_controller", 00:28:10.187 "vhost_scsi_controller_remove_target", 00:28:10.187 "vhost_scsi_controller_add_target", 00:28:10.187 "vhost_start_scsi_controller", 00:28:10.187 "vhost_create_scsi_controller", 00:28:10.187 "thread_set_cpumask", 00:28:10.187 "scheduler_set_options", 00:28:10.187 "framework_get_governor", 00:28:10.187 "framework_get_scheduler", 00:28:10.187 "framework_set_scheduler", 00:28:10.187 "framework_get_reactors", 00:28:10.187 "thread_get_io_channels", 00:28:10.187 "thread_get_pollers", 00:28:10.187 "thread_get_stats", 00:28:10.187 "framework_monitor_context_switch", 00:28:10.187 "spdk_kill_instance", 00:28:10.187 "log_enable_timestamps", 00:28:10.187 "log_get_flags", 00:28:10.187 "log_clear_flag", 00:28:10.187 "log_set_flag", 00:28:10.187 "log_get_level", 00:28:10.187 "log_set_level", 00:28:10.187 "log_get_print_level", 00:28:10.187 "log_set_print_level", 00:28:10.187 "framework_enable_cpumask_locks", 00:28:10.187 "framework_disable_cpumask_locks", 00:28:10.187 "framework_wait_init", 00:28:10.187 "framework_start_init", 00:28:10.187 "scsi_get_devices", 00:28:10.187 "bdev_get_histogram", 00:28:10.187 "bdev_enable_histogram", 00:28:10.187 "bdev_set_qos_limit", 00:28:10.187 "bdev_set_qd_sampling_period", 00:28:10.187 "bdev_get_bdevs", 00:28:10.187 "bdev_reset_iostat", 00:28:10.187 "bdev_get_iostat", 00:28:10.187 "bdev_examine", 00:28:10.187 "bdev_wait_for_examine", 00:28:10.187 "bdev_set_options", 00:28:10.187 "accel_get_stats", 00:28:10.187 "accel_set_options", 
00:28:10.187 "accel_set_driver", 00:28:10.187 "accel_crypto_key_destroy", 00:28:10.187 "accel_crypto_keys_get", 00:28:10.187 "accel_crypto_key_create", 00:28:10.187 "accel_assign_opc", 00:28:10.187 "accel_get_module_info", 00:28:10.187 "accel_get_opc_assignments", 00:28:10.187 "vmd_rescan", 00:28:10.187 "vmd_remove_device", 00:28:10.187 "vmd_enable", 00:28:10.187 "sock_get_default_impl", 00:28:10.187 "sock_set_default_impl", 00:28:10.187 "sock_impl_set_options", 00:28:10.187 "sock_impl_get_options", 00:28:10.188 "iobuf_get_stats", 00:28:10.188 "iobuf_set_options", 00:28:10.188 "keyring_get_keys", 00:28:10.188 "framework_get_pci_devices", 00:28:10.188 "framework_get_config", 00:28:10.188 "framework_get_subsystems", 00:28:10.188 "fsdev_set_opts", 00:28:10.188 "fsdev_get_opts", 00:28:10.188 "trace_get_info", 00:28:10.188 "trace_get_tpoint_group_mask", 00:28:10.188 "trace_disable_tpoint_group", 00:28:10.188 "trace_enable_tpoint_group", 00:28:10.188 "trace_clear_tpoint_mask", 00:28:10.188 "trace_set_tpoint_mask", 00:28:10.188 "notify_get_notifications", 00:28:10.188 "notify_get_types", 00:28:10.188 "spdk_get_version", 00:28:10.188 "rpc_get_methods" 00:28:10.188 ] 00:28:10.188 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:10.188 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:28:10.188 13:57:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70114 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70114 ']' 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70114 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.188 13:57:16 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70114 00:28:10.188 killing process with pid 70114 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70114' 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70114 00:28:10.188 13:57:16 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70114 00:28:10.755 00:28:10.755 real 0m2.033s 00:28:10.755 user 0m3.495s 00:28:10.755 sys 0m0.650s 00:28:10.755 13:57:17 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:10.755 13:57:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:10.755 ************************************ 00:28:10.755 END TEST spdkcli_tcp 00:28:10.755 ************************************ 00:28:10.755 13:57:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:28:10.755 13:57:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:10.755 13:57:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:10.755 13:57:17 -- common/autotest_common.sh@10 -- # set +x 00:28:10.755 ************************************ 00:28:10.755 START TEST dpdk_mem_utility 00:28:10.755 ************************************ 00:28:10.755 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:28:10.755 * Looking for test storage... 
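The long quoted list ending in "rpc_get_methods" above is the JSON array that spdk_tgt returns for the rpc_get_methods call, and the trace shows the test driving it through rpc_cmd. A minimal sketch of issuing that call directly, assuming a spdk_tgt already listening on /var/tmp/spdk.sock and plain JSON-RPC 2.0 framing over a Unix stream socket (which is what scripts/rpc.py speaks; the helper name below is my own):

```python
import json
import socket

def rpc_get_methods(sock_path="/var/tmp/spdk.sock"):
    """Send a JSON-RPC 2.0 'rpc_get_methods' request over SPDK's Unix socket.

    Assumes a target process is already listening on sock_path.
    """
    request = {"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                # The response is a single JSON object; return once it parses.
                return json.loads(buf.decode())["result"]
            except ValueError:
                continue  # partial response, keep reading
    raise RuntimeError("socket closed before a full response arrived")
```

In the log the same result is obtained via `rpc_cmd rpc_get_methods`, which shells out to scripts/rpc.py against the same socket.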
00:28:10.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:28:10.755 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:10.755 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:10.755 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.014 13:57:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:11.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.014 --rc genhtml_branch_coverage=1 00:28:11.014 --rc genhtml_function_coverage=1 00:28:11.014 --rc genhtml_legend=1 00:28:11.014 --rc geninfo_all_blocks=1 00:28:11.014 --rc geninfo_unexecuted_blocks=1 00:28:11.014 00:28:11.014 ' 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:11.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.014 --rc genhtml_branch_coverage=1 00:28:11.014 --rc genhtml_function_coverage=1 00:28:11.014 --rc genhtml_legend=1 00:28:11.014 --rc geninfo_all_blocks=1 00:28:11.014 --rc 
geninfo_unexecuted_blocks=1 00:28:11.014 00:28:11.014 ' 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:11.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.014 --rc genhtml_branch_coverage=1 00:28:11.014 --rc genhtml_function_coverage=1 00:28:11.014 --rc genhtml_legend=1 00:28:11.014 --rc geninfo_all_blocks=1 00:28:11.014 --rc geninfo_unexecuted_blocks=1 00:28:11.014 00:28:11.014 ' 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:11.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.014 --rc genhtml_branch_coverage=1 00:28:11.014 --rc genhtml_function_coverage=1 00:28:11.014 --rc genhtml_legend=1 00:28:11.014 --rc geninfo_all_blocks=1 00:28:11.014 --rc geninfo_unexecuted_blocks=1 00:28:11.014 00:28:11.014 ' 00:28:11.014 13:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:28:11.014 13:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70214 00:28:11.014 13:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:11.014 13:57:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70214 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70214 ']' 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
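The waitforlisten 70214 step above blocks (with max_retries=100) until the freshly started spdk_tgt accepts connections on /var/tmp/spdk.sock. A rough Python port of that polling pattern, written from the trace rather than from the actual shell helper, so treat the retry/delay defaults as assumptions:

```python
import socket
import time

def wait_for_listen(sock_path, max_retries=100, delay=0.1):
    """Poll until something accepts connections on a Unix domain socket.

    Mirrors the waitforlisten loop in the trace above: retry a connect()
    up to max_retries times, sleeping between attempts. A fresh socket is
    created per attempt since a failed connect leaves the old one unusable.
    """
    for _ in range(max_retries):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(sock_path)
                return True  # something is listening
            except OSError:
                time.sleep(delay)
    return False
```

The connect-probe approach works because connecting to a listening Unix socket succeeds as soon as the kernel queues the connection, even before the server calls accept().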
00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:11.014 13:57:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:28:11.014 [2024-10-09 13:57:17.498920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:28:11.014 [2024-10-09 13:57:17.499170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ]
00:28:11.273 [2024-10-09 13:57:17.677528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:11.273 [2024-10-09 13:57:17.724670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:28:12.212 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:12.212 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:28:12.212 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:28:12.212 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:28:12.212 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:12.212 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:28:12.212 {
00:28:12.212 "filename": "/tmp/spdk_mem_dump.txt"
00:28:12.212 }
00:28:12.212 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:12.212 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:28:12.212 DPDK memory size 860.000000 MiB in 1 heap(s)
00:28:12.212 1 heaps totaling size 860.000000 MiB
00:28:12.212 size: 860.000000 MiB heap id: 0
00:28:12.212 end heaps----------
00:28:12.212 9 mempools totaling size 642.649841 MiB
00:28:12.212 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:28:12.212 size: 158.602051 MiB name: PDU_data_out_Pool
00:28:12.212 size: 92.545471 MiB name: bdev_io_70214
00:28:12.212 size: 51.011292 MiB name: evtpool_70214
00:28:12.212 size: 50.003479 MiB name: msgpool_70214
00:28:12.212 size: 36.509338 MiB name: fsdev_io_70214
00:28:12.212 size: 21.763794 MiB name: PDU_Pool
00:28:12.212 size: 19.513306 MiB name: SCSI_TASK_Pool
00:28:12.212 size: 0.026123 MiB name: Session_Pool
00:28:12.212 end mempools-------
00:28:12.212 6 memzones totaling size 4.142822 MiB
00:28:12.212 size: 1.000366 MiB name: RG_ring_0_70214
00:28:12.212 size: 1.000366 MiB name: RG_ring_1_70214
00:28:12.212 size: 1.000366 MiB name: RG_ring_4_70214
00:28:12.212 size: 1.000366 MiB name: RG_ring_5_70214
00:28:12.212 size: 0.125366 MiB name: RG_ring_2_70214
00:28:12.212 size: 0.015991 MiB name: RG_ring_3_70214
00:28:12.212 end memzones-------
00:28:12.212 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:28:12.212 heap id: 0 total size: 860.000000 MiB number of busy elements: 300 number of free elements: 16
00:28:12.212 list of free elements.
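The mempool and memzone listing above is the summary that dpdk_mem_info.py renders from the /tmp/spdk_mem_dump.txt file written by env_dpdk_get_mem_stats. A small sketch of extracting the "size: ... MiB name: ..." records from such output for programmatic checks (the parser is my own, not part of SPDK, and it tolerates the timestamps interleaved in this log):

```python
import re

def parse_mem_summary(text):
    """Map 'size: <float> MiB name: <name>' records to {name: size_mib}.

    Works on dpdk_mem_info.py summary output, including flattened log
    text where timestamps sit between records, because the match is
    applied globally rather than line by line.
    """
    out = {}
    for size, name in re.findall(r"size:\s+([\d.]+)\s+MiB\s+name:\s+(\S+)", text):
        out[name] = float(size)
    return out

dump = """
size: 212.674988 MiB name: PDU_immediate_data_Pool
size: 158.602051 MiB name: PDU_data_out_Pool
size: 0.026123 MiB name: Session_Pool
"""
pools = parse_mem_summary(dump)
```

With the full nine-mempool listing above, the per-pool sizes returned this way should sum to the reported 642.649841 MiB total.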
size: 13.937805 MiB 00:28:12.212 element at address: 0x200000400000 with size: 1.999512 MiB 00:28:12.212 element at address: 0x200000800000 with size: 1.996948 MiB 00:28:12.212 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:28:12.212 element at address: 0x20001be00000 with size: 0.999878 MiB 00:28:12.212 element at address: 0x200034a00000 with size: 0.994446 MiB 00:28:12.212 element at address: 0x200009600000 with size: 0.959839 MiB 00:28:12.212 element at address: 0x200015e00000 with size: 0.954285 MiB 00:28:12.212 element at address: 0x20001c000000 with size: 0.936584 MiB 00:28:12.212 element at address: 0x200000200000 with size: 0.834839 MiB 00:28:12.212 element at address: 0x20001d800000 with size: 0.568420 MiB 00:28:12.212 element at address: 0x20000d800000 with size: 0.489807 MiB 00:28:12.212 element at address: 0x200003e00000 with size: 0.488464 MiB 00:28:12.212 element at address: 0x20001c200000 with size: 0.485657 MiB 00:28:12.212 element at address: 0x200007000000 with size: 0.480469 MiB 00:28:12.212 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:28:12.212 element at address: 0x200003a00000 with size: 0.353027 MiB 00:28:12.212 list of standard malloc elements. 
size: 199.265503 MiB 00:28:12.212 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:28:12.212 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:28:12.212 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:28:12.212 element at address: 0x20001befff80 with size: 1.000122 MiB 00:28:12.212 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:28:12.212 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:28:12.212 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:28:12.212 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:28:12.212 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:28:12.212 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:28:12.212 element at 
address: 0x2000002d6a40 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:28:12.212 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a5a800 with size: 0.000183 MiB 
00:28:12.213 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003aff880 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003affa80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003affb40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d900 with 
size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:28:12.213 element at address: 
0x200003e7ee00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b000 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b180 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b240 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b300 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b480 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b540 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b600 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:28:12.213 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891840 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891900 with size: 0.000183 MiB 00:28:12.213 
element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892080 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892140 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892200 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892380 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892440 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892500 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892680 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892740 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892800 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892980 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892e00 with size: 0.000183 
MiB 00:28:12.213 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893040 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893100 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893280 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893340 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893400 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893580 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893640 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893700 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893880 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893940 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894000 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894180 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894240 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894300 
with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894480 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894540 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894600 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894780 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894840 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894900 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d895080 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d895140 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d895200 with size: 0.000183 MiB 00:28:12.213 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20001d895380 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20001d895440 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:28:12.214 element at 
address: 0x20002ac6c480 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 
00:28:12.214 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6edc0 with 
size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:28:12.214 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:28:12.214 list of memzone associated elements. 
size: 646.796692 MiB 00:28:12.214 element at address: 0x20001d895500 with size: 211.416748 MiB 00:28:12.214 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:28:12.214 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:28:12.214 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:28:12.214 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:28:12.214 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70214_0 00:28:12.214 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:28:12.214 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70214_0 00:28:12.214 element at address: 0x200003fff380 with size: 48.003052 MiB 00:28:12.214 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70214_0 00:28:12.214 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:28:12.214 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70214_0 00:28:12.214 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:28:12.214 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:28:12.214 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:28:12.214 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:28:12.214 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:28:12.214 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70214 00:28:12.214 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:28:12.214 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70214 00:28:12.214 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:28:12.214 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70214 00:28:12.214 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:28:12.214 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:28:12.214 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:28:12.214 associated memzone 
info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:28:12.214 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:28:12.214 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:28:12.214 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:28:12.214 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:28:12.214 element at address: 0x200003eff180 with size: 1.000488 MiB 00:28:12.214 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70214 00:28:12.214 element at address: 0x200003affc00 with size: 1.000488 MiB 00:28:12.214 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70214 00:28:12.214 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:28:12.214 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70214 00:28:12.214 element at address: 0x200034afe940 with size: 1.000488 MiB 00:28:12.214 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70214 00:28:12.214 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:28:12.214 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70214 00:28:12.214 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:28:12.214 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70214 00:28:12.214 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:28:12.214 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:28:12.214 element at address: 0x20000707b780 with size: 0.500488 MiB 00:28:12.214 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:28:12.214 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:28:12.214 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:28:12.214 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:28:12.214 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70214 00:28:12.214 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:28:12.214 associated 
memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:28:12.214 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:28:12.214 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:28:12.214 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:28:12.214 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70214 00:28:12.214 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:28:12.214 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:28:12.214 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:28:12.214 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70214 00:28:12.214 element at address: 0x200003aff940 with size: 0.000305 MiB 00:28:12.214 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70214 00:28:12.214 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:28:12.214 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70214 00:28:12.214 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:28:12.214 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:28:12.214 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:28:12.214 13:57:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70214 00:28:12.214 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70214 ']' 00:28:12.214 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70214 00:28:12.214 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:28:12.214 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:12.215 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70214 00:28:12.215 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:12.215 killing process with pid 70214 00:28:12.215 13:57:18 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:12.215 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70214' 00:28:12.215 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70214 00:28:12.215 13:57:18 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70214 00:28:12.782 00:28:12.782 real 0m1.922s 00:28:12.782 user 0m1.994s 00:28:12.782 sys 0m0.601s 00:28:12.782 ************************************ 00:28:12.782 END TEST dpdk_mem_utility 00:28:12.782 ************************************ 00:28:12.782 13:57:19 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:12.782 13:57:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:28:12.782 13:57:19 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:28:12.782 13:57:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:12.782 13:57:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.782 13:57:19 -- common/autotest_common.sh@10 -- # set +x 00:28:12.782 ************************************ 00:28:12.782 START TEST event 00:28:12.782 ************************************ 00:28:12.782 13:57:19 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:28:12.782 * Looking for test storage... 
00:28:12.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:28:12.782 13:57:19 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:12.782 13:57:19 event -- common/autotest_common.sh@1681 -- # lcov --version 00:28:12.782 13:57:19 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:12.782 13:57:19 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:12.782 13:57:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.782 13:57:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.782 13:57:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.782 13:57:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.782 13:57:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.782 13:57:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.782 13:57:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.782 13:57:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.782 13:57:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.782 13:57:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.782 13:57:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.782 13:57:19 event -- scripts/common.sh@344 -- # case "$op" in 00:28:12.782 13:57:19 event -- scripts/common.sh@345 -- # : 1 00:28:12.782 13:57:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.782 13:57:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.041 13:57:19 event -- scripts/common.sh@365 -- # decimal 1 00:28:13.041 13:57:19 event -- scripts/common.sh@353 -- # local d=1 00:28:13.041 13:57:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.041 13:57:19 event -- scripts/common.sh@355 -- # echo 1 00:28:13.041 13:57:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.041 13:57:19 event -- scripts/common.sh@366 -- # decimal 2 00:28:13.041 13:57:19 event -- scripts/common.sh@353 -- # local d=2 00:28:13.041 13:57:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.041 13:57:19 event -- scripts/common.sh@355 -- # echo 2 00:28:13.041 13:57:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.041 13:57:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.041 13:57:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.041 13:57:19 event -- scripts/common.sh@368 -- # return 0 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.041 --rc genhtml_branch_coverage=1 00:28:13.041 --rc genhtml_function_coverage=1 00:28:13.041 --rc genhtml_legend=1 00:28:13.041 --rc geninfo_all_blocks=1 00:28:13.041 --rc geninfo_unexecuted_blocks=1 00:28:13.041 00:28:13.041 ' 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.041 --rc genhtml_branch_coverage=1 00:28:13.041 --rc genhtml_function_coverage=1 00:28:13.041 --rc genhtml_legend=1 00:28:13.041 --rc geninfo_all_blocks=1 00:28:13.041 --rc geninfo_unexecuted_blocks=1 00:28:13.041 00:28:13.041 ' 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:13.041 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:13.041 --rc genhtml_branch_coverage=1 00:28:13.041 --rc genhtml_function_coverage=1 00:28:13.041 --rc genhtml_legend=1 00:28:13.041 --rc geninfo_all_blocks=1 00:28:13.041 --rc geninfo_unexecuted_blocks=1 00:28:13.041 00:28:13.041 ' 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:13.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.041 --rc genhtml_branch_coverage=1 00:28:13.041 --rc genhtml_function_coverage=1 00:28:13.041 --rc genhtml_legend=1 00:28:13.041 --rc geninfo_all_blocks=1 00:28:13.041 --rc geninfo_unexecuted_blocks=1 00:28:13.041 00:28:13.041 ' 00:28:13.041 13:57:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:13.041 13:57:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:28:13.041 13:57:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:13.041 13:57:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:13.041 13:57:19 event -- common/autotest_common.sh@10 -- # set +x 00:28:13.041 ************************************ 00:28:13.041 START TEST event_perf 00:28:13.041 ************************************ 00:28:13.041 13:57:19 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:28:13.041 Running I/O for 1 seconds...[2024-10-09 13:57:19.406943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:13.041 [2024-10-09 13:57:19.407133] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70300 ]
00:28:13.300 [2024-10-09 13:57:19.590758] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:13.300 [2024-10-09 13:57:19.643194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:28:13.300 [2024-10-09 13:57:19.643338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:28:13.300 [2024-10-09 13:57:19.643359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:28:13.300 [2024-10-09 13:57:19.643454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:28:14.236 Running I/O for 1 seconds...
00:28:14.237 lcore 0: 174049
00:28:14.237 lcore 1: 174048
00:28:14.237 lcore 2: 174049
00:28:14.237 lcore 3: 174049
00:28:14.237 done.
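The event_perf run above prints one event count per lcore for the four reactors. A minimal sketch of how those "lcore N: <count>" lines could be totaled into a single throughput figure for the 1-second run; the `sum_lcore_events` helper and the `perf.log` file name are illustrative assumptions, not part of the harness:

```shell
# Hypothetical helper (not from autotest_common.sh): sum the per-lcore
# counts that event_perf prints as "lcore N: <events>" into one total.
sum_lcore_events() {
    awk '/^lcore [0-9]+:/ { total += $3 } END { print total }' "$1"
}

# Feed it the four lcore lines from the run above.
printf 'lcore 0: 174049\nlcore 1: 174048\nlcore 2: 174049\nlcore 3: 174049\n' > perf.log
sum_lcore_events perf.log   # prints 696195
```

Since the test runs for one second (`-t 1`), the sum is also the aggregate events-per-second across all four reactors.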
00:28:14.237 00:28:14.237 real 0m1.387s 00:28:14.237 user 0m4.115s 00:28:14.237 sys 0m0.147s 00:28:14.237 13:57:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:14.237 13:57:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:28:14.237 ************************************ 00:28:14.237 END TEST event_perf 00:28:14.237 ************************************ 00:28:14.495 13:57:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:28:14.495 13:57:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:14.495 13:57:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.495 13:57:20 event -- common/autotest_common.sh@10 -- # set +x 00:28:14.495 ************************************ 00:28:14.495 START TEST event_reactor 00:28:14.495 ************************************ 00:28:14.495 13:57:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:28:14.495 [2024-10-09 13:57:20.851400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:14.495 [2024-10-09 13:57:20.851587] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70340 ]
00:28:14.495 [2024-10-09 13:57:21.029902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:14.754 [2024-10-09 13:57:21.079531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:28:15.739 test_start
00:28:15.739 oneshot
00:28:15.739 tick 100
00:28:15.739 tick 100
00:28:15.739 tick 250
00:28:15.739 tick 100
00:28:15.739 tick 100
00:28:15.739 tick 100
00:28:15.739 tick 250
00:28:15.739 tick 500
00:28:15.739 tick 100
00:28:15.739 tick 100
00:28:15.739 tick 250
00:28:15.739 tick 100
00:28:15.739 tick 100
00:28:15.739 test_end
00:28:15.739
00:28:15.739 real 0m1.369s
00:28:15.739 user 0m1.147s
00:28:15.739 sys 0m0.113s
00:28:15.739 13:57:22 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:15.739 13:57:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:28:15.739 ************************************
00:28:15.739 END TEST event_reactor
00:28:15.739 ************************************
00:28:15.739 13:57:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:28:15.739 13:57:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:28:15.739 13:57:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:15.739 13:57:22 event -- common/autotest_common.sh@10 -- # set +x
00:28:15.739 ************************************
00:28:15.739 START TEST event_reactor_perf
00:28:15.739 ************************************
00:28:15.739 13:57:22 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:28:15.997 [2024-10-09
13:57:22.283294] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:15.739 [2024-10-09 13:57:22.283993] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70371 ] 00:28:15.997 [2024-10-09 13:57:22.462282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.997 [2024-10-09 13:57:22.512026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.372 test_start 00:28:17.372 test_end 00:28:17.372 Performance: 354031 events per second 00:28:17.372 00:28:17.372 real 0m1.364s 00:28:17.372 user 0m1.140s 00:28:17.372 sys 0m0.116s 00:28:17.372 13:57:23 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.372 ************************************ 00:28:17.372 END TEST event_reactor_perf 00:28:17.372 13:57:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:28:17.372 ************************************ 00:28:17.372 13:57:23 event -- event/event.sh@49 -- # uname -s 00:28:17.372 13:57:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:28:17.372 13:57:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:28:17.372 13:57:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:17.372 13:57:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:17.372 13:57:23 event -- common/autotest_common.sh@10 -- # set +x 00:28:17.372 ************************************ 00:28:17.372 START TEST event_scheduler 00:28:17.372 ************************************ 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:28:17.372 * Looking for test storage... 
00:28:17.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.372 13:57:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:17.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.372 --rc genhtml_branch_coverage=1 00:28:17.372 --rc genhtml_function_coverage=1 00:28:17.372 --rc genhtml_legend=1 00:28:17.372 --rc geninfo_all_blocks=1 00:28:17.372 --rc geninfo_unexecuted_blocks=1 00:28:17.372 00:28:17.372 ' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:17.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.372 --rc genhtml_branch_coverage=1 00:28:17.372 --rc genhtml_function_coverage=1 00:28:17.372 --rc 
genhtml_legend=1 00:28:17.372 --rc geninfo_all_blocks=1 00:28:17.372 --rc geninfo_unexecuted_blocks=1 00:28:17.372 00:28:17.372 ' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:17.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.372 --rc genhtml_branch_coverage=1 00:28:17.372 --rc genhtml_function_coverage=1 00:28:17.372 --rc genhtml_legend=1 00:28:17.372 --rc geninfo_all_blocks=1 00:28:17.372 --rc geninfo_unexecuted_blocks=1 00:28:17.372 00:28:17.372 ' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:17.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.372 --rc genhtml_branch_coverage=1 00:28:17.372 --rc genhtml_function_coverage=1 00:28:17.372 --rc genhtml_legend=1 00:28:17.372 --rc geninfo_all_blocks=1 00:28:17.372 --rc geninfo_unexecuted_blocks=1 00:28:17.372 00:28:17.372 ' 00:28:17.372 13:57:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:28:17.372 13:57:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70447 00:28:17.372 13:57:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:28:17.372 13:57:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70447 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70447 ']' 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
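The scheduler test above launches its target and then blocks in `waitforlisten` until the RPC socket is up; earlier, `killprocess` probed the PID with `kill -0` before killing it. A rough sketch of that poll-then-kill pattern; the function name, retry count, and interval here are assumptions, the real helpers live in common/autotest_common.sh:

```shell
# Sketch of the harness's liveness polling: return 0 as soon as the PID
# answers `kill -0` (signal 0 = existence check only), 1 once retries run out.
wait_for_pid() {
    pid=$1
    retries=${2:-50}
    while [ "$retries" -gt 0 ]; do
        if kill -0 "$pid" 2>/dev/null; then
            return 0
        fi
        retries=$((retries - 1))
        sleep 0.1    # fractional sleep needs GNU coreutils
    done
    return 1
}

# Demo against a short-lived background process.
sleep 5 &
bg=$!
wait_for_pid "$bg" && echo "pid $bg is alive"
kill "$bg" 2>/dev/null
```

The real `waitforlisten` additionally checks that the UNIX domain socket (here /var/tmp/spdk.sock) accepts RPCs, not just that the process exists.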
00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.372 13:57:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:17.372 13:57:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:28:17.631 [2024-10-09 13:57:23.974193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:17.631 [2024-10-09 13:57:23.974398] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70447 ] 00:28:17.631 [2024-10-09 13:57:24.156849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.889 [2024-10-09 13:57:24.218535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.889 [2024-10-09 13:57:24.218718] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.889 [2024-10-09 13:57:24.218834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.889 [2024-10-09 13:57:24.218939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:28:18.457 13:57:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:18.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:18.457 POWER: Cannot set governor of lcore 0 to userspace 00:28:18.457 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:18.457 POWER: Cannot set governor of lcore 0 to performance 00:28:18.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:18.457 POWER: Cannot set governor of lcore 0 to userspace 00:28:18.457 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:28:18.457 POWER: Cannot set governor of lcore 0 to userspace 00:28:18.457 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:28:18.457 POWER: Unable to set Power Management Environment for lcore 0 00:28:18.457 [2024-10-09 13:57:24.933516] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:28:18.457 [2024-10-09 13:57:24.933547] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:28:18.457 [2024-10-09 13:57:24.933610] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:28:18.457 [2024-10-09 13:57:24.933631] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:28:18.457 [2024-10-09 13:57:24.933673] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:28:18.457 [2024-10-09 13:57:24.933686] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.457 13:57:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.457 13:57:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:18.715 [2024-10-09 13:57:25.005069] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
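The POWER errors above mean the dynamic scheduler could not write the per-core cpufreq `scaling_governor` sysfs files in this VM, so it fell back past the DPDK governor and ran with its default load/core/busy limits. A quick, read-only check of what each core currently exposes; the paths are the standard kernel cpufreq layout, but many VMs (as here) simply do not provide them:

```shell
# Print the active cpufreq governor per core, or note that the
# interface is absent. Read-only: this does not try to set anything.
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    if [ -r "$gov" ]; then
        printf '%s: %s\n' "$gov" "$(cat "$gov")"
    else
        echo "cpufreq not available"
        break
    fi
done
```

Setting a governor (what the scheduler attempted) would mean writing e.g. `userspace` into those same files, which requires root and hardware/hypervisor support.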
00:28:18.715 13:57:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.715 13:57:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:28:18.715 13:57:25 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:18.715 13:57:25 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.715 13:57:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 ************************************ 00:28:18.716 START TEST scheduler_create_thread 00:28:18.716 ************************************ 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 2 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 3 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 4 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 5 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 6 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.716 7 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 8 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 9 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 10 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.716 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:19.283 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.283 13:57:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:28:19.283 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.283 13:57:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:20.657 13:57:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.657 13:57:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:28:20.657 13:57:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:28:20.657 13:57:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.657 13:57:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:21.594 ************************************ 00:28:21.594 END TEST scheduler_create_thread 00:28:21.594 ************************************ 00:28:21.594 13:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.594 00:28:21.594 real 0m3.092s 00:28:21.594 user 0m0.014s 00:28:21.594 sys 0m0.013s 00:28:21.594 13:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:21.594 13:57:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:28:21.852 13:57:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:21.852 13:57:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70447 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70447 ']' 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70447 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70447 00:28:21.852 killing process with pid 70447 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70447' 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70447 00:28:21.852 13:57:28 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70447 00:28:22.111 [2024-10-09 13:57:28.489767] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:28:22.370 00:28:22.370 real 0m5.104s 00:28:22.370 user 0m9.658s 00:28:22.370 sys 0m0.517s 00:28:22.370 13:57:28 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:22.370 13:57:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:28:22.370 ************************************ 00:28:22.370 END TEST event_scheduler 00:28:22.370 ************************************ 00:28:22.370 13:57:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:28:22.370 13:57:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:28:22.370 13:57:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:22.370 13:57:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:22.370 13:57:28 event -- common/autotest_common.sh@10 -- # set +x 00:28:22.370 ************************************ 00:28:22.370 START TEST app_repeat 00:28:22.370 ************************************ 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70555 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:28:22.370 Process app_repeat pid: 70555 00:28:22.370 
13:57:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70555' 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:22.370 spdk_app_start Round 0 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:28:22.370 13:57:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70555 /var/tmp/spdk-nbd.sock 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70555 ']' 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.370 13:57:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:22.370 [2024-10-09 13:57:28.898849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:22.370 [2024-10-09 13:57:28.899049] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70555 ] 00:28:22.629 [2024-10-09 13:57:29.074403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:22.629 [2024-10-09 13:57:29.121633] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.629 [2024-10-09 13:57:29.121742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.561 13:57:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:23.561 13:57:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:28:23.561 13:57:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:23.561 Malloc0 00:28:23.561 13:57:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:23.819 Malloc1 00:28:23.819 13:57:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:23.819 13:57:30 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:23.819 13:57:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:24.076 /dev/nbd0 00:28:24.076 13:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:24.076 13:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:24.076 1+0 records in 00:28:24.076 1+0 
records out 00:28:24.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301604 s, 13.6 MB/s 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:24.076 13:57:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:24.076 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.076 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.076 13:57:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:24.332 /dev/nbd1 00:28:24.332 13:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:24.332 13:57:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:24.332 1+0 records in 00:28:24.332 1+0 records out 00:28:24.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322542 s, 12.7 MB/s 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:24.332 13:57:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:24.332 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:24.332 13:57:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:24.589 13:57:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:24.589 13:57:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:24.589 13:57:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:24.917 { 00:28:24.917 "nbd_device": "/dev/nbd0", 00:28:24.917 "bdev_name": "Malloc0" 00:28:24.917 }, 00:28:24.917 { 00:28:24.917 "nbd_device": "/dev/nbd1", 00:28:24.917 "bdev_name": "Malloc1" 00:28:24.917 } 00:28:24.917 ]' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:24.917 { 00:28:24.917 "nbd_device": "/dev/nbd0", 00:28:24.917 "bdev_name": "Malloc0" 00:28:24.917 }, 00:28:24.917 { 00:28:24.917 "nbd_device": "/dev/nbd1", 00:28:24.917 "bdev_name": "Malloc1" 00:28:24.917 } 00:28:24.917 ]' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:24.917 /dev/nbd1' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:24.917 /dev/nbd1' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:24.917 256+0 records in 00:28:24.917 256+0 records out 00:28:24.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110385 s, 95.0 MB/s 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:24.917 256+0 records in 00:28:24.917 256+0 records out 00:28:24.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264013 s, 39.7 MB/s 00:28:24.917 13:57:31 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:24.917 256+0 records in 00:28:24.917 256+0 records out 00:28:24.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315883 s, 33.2 MB/s 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.917 13:57:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:25.175 13:57:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:25.433 13:57:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:25.691 13:57:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:25.691 13:57:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:25.949 13:57:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:26.208 [2024-10-09 13:57:32.529238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:26.208 [2024-10-09 13:57:32.571977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.208 [2024-10-09 13:57:32.571979] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.208 
[2024-10-09 13:57:32.615325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:26.208 [2024-10-09 13:57:32.615390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:29.490 13:57:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:29.490 spdk_app_start Round 1 00:28:29.490 13:57:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:28:29.490 13:57:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70555 /var/tmp/spdk-nbd.sock 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70555 ']' 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:29.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:29.490 13:57:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:28:29.490 13:57:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:29.490 Malloc0 00:28:29.490 13:57:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:29.748 Malloc1 00:28:29.748 13:57:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:29.748 13:57:36 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:29.748 13:57:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:30.007 /dev/nbd0 00:28:30.007 13:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:30.007 13:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:30.007 1+0 records in 00:28:30.007 1+0 records out 00:28:30.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272386 s, 15.0 MB/s 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:30.007 
13:57:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:30.007 13:57:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:30.007 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:30.007 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:30.007 13:57:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:30.265 /dev/nbd1 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:30.265 1+0 records in 00:28:30.265 1+0 records out 00:28:30.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276266 s, 14.8 MB/s 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:30.265 13:57:36 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:30.265 13:57:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:30.265 13:57:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.266 13:57:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:30.524 13:57:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:30.524 { 00:28:30.524 "nbd_device": "/dev/nbd0", 00:28:30.524 "bdev_name": "Malloc0" 00:28:30.524 }, 00:28:30.524 { 00:28:30.524 "nbd_device": "/dev/nbd1", 00:28:30.524 "bdev_name": "Malloc1" 00:28:30.524 } 00:28:30.524 ]' 00:28:30.524 13:57:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:30.524 { 00:28:30.524 "nbd_device": "/dev/nbd0", 00:28:30.524 "bdev_name": "Malloc0" 00:28:30.524 }, 00:28:30.524 { 00:28:30.524 "nbd_device": "/dev/nbd1", 00:28:30.524 "bdev_name": "Malloc1" 00:28:30.524 } 00:28:30.524 ]' 00:28:30.524 13:57:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:30.524 /dev/nbd1' 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:30.524 /dev/nbd1' 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:30.524 
13:57:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:30.524 256+0 records in 00:28:30.524 256+0 records out 00:28:30.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00684807 s, 153 MB/s 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:30.524 13:57:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:30.782 256+0 records in 00:28:30.782 256+0 records out 00:28:30.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204801 s, 51.2 MB/s 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:30.782 256+0 records in 00:28:30.782 256+0 records out 00:28:30.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252269 s, 41.6 MB/s 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:30.782 13:57:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:31.041 13:57:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.041 13:57:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.300 13:57:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.558 13:57:37 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:31.558 13:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:31.558 13:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:31.558 13:57:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:31.558 13:57:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:32.125 13:57:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:32.125 [2024-10-09 13:57:38.524236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:32.125 [2024-10-09 13:57:38.568570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.125 [2024-10-09 13:57:38.568597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.125 [2024-10-09 13:57:38.612037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:32.125 [2024-10-09 13:57:38.612100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:28:35.411 spdk_app_start Round 2 00:28:35.411 13:57:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:28:35.411 13:57:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:28:35.411 13:57:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70555 /var/tmp/spdk-nbd.sock 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70555 ']' 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:35.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:35.411 13:57:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:28:35.411 13:57:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:35.411 Malloc0 00:28:35.411 13:57:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:28:35.671 Malloc1 00:28:35.671 13:57:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:35.671 
13:57:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:35.671 13:57:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:28:35.930 /dev/nbd0 00:28:35.930 13:57:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:35.930 13:57:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:35.930 13:57:42 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:35.930 1+0 records in 00:28:35.930 1+0 records out 00:28:35.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526765 s, 7.8 MB/s 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:35.930 13:57:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:35.930 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:35.930 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:35.930 13:57:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:28:36.189 /dev/nbd1 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:36.189 13:57:42 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:28:36.189 1+0 records in 00:28:36.189 1+0 records out 00:28:36.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399607 s, 10.3 MB/s 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:36.189 13:57:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:36.189 13:57:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:36.770 { 00:28:36.770 "nbd_device": "/dev/nbd0", 00:28:36.770 "bdev_name": "Malloc0" 00:28:36.770 }, 00:28:36.770 { 00:28:36.770 "nbd_device": "/dev/nbd1", 00:28:36.770 "bdev_name": 
"Malloc1" 00:28:36.770 } 00:28:36.770 ]' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:36.770 { 00:28:36.770 "nbd_device": "/dev/nbd0", 00:28:36.770 "bdev_name": "Malloc0" 00:28:36.770 }, 00:28:36.770 { 00:28:36.770 "nbd_device": "/dev/nbd1", 00:28:36.770 "bdev_name": "Malloc1" 00:28:36.770 } 00:28:36.770 ]' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:36.770 /dev/nbd1' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:36.770 /dev/nbd1' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:28:36.770 256+0 records in 00:28:36.770 256+0 records out 00:28:36.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00681585 s, 154 MB/s 
00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:36.770 256+0 records in 00:28:36.770 256+0 records out 00:28:36.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239663 s, 43.8 MB/s 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:36.770 256+0 records in 00:28:36.770 256+0 records out 00:28:36.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339387 s, 30.9 MB/s 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:36.770 13:57:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.771 13:57:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.037 13:57:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:37.296 13:57:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:37.554 13:57:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:37.554 13:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:37.554 13:57:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:37.554 13:57:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:28:37.554 13:57:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:28:38.121 13:57:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:28:38.121 [2024-10-09 13:57:44.543745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.121 [2024-10-09 13:57:44.594511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.121 [2024-10-09 13:57:44.594514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.121 [2024-10-09 13:57:44.639655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:28:38.121 [2024-10-09 13:57:44.639718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:28:41.422 13:57:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70555 /var/tmp/spdk-nbd.sock 00:28:41.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70555 ']' 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:28:41.422 13:57:47 event.app_repeat -- event/event.sh@39 -- # killprocess 70555 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70555 ']' 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70555 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70555 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70555' 00:28:41.422 killing process with pid 70555 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70555 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70555 00:28:41.422 spdk_app_start is called in Round 0. 00:28:41.422 Shutdown signal received, stop current app iteration 00:28:41.422 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:28:41.422 spdk_app_start is called in Round 1. 00:28:41.422 Shutdown signal received, stop current app iteration 00:28:41.422 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:28:41.422 spdk_app_start is called in Round 2. 
00:28:41.422 Shutdown signal received, stop current app iteration 00:28:41.422 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:28:41.422 spdk_app_start is called in Round 3. 00:28:41.422 Shutdown signal received, stop current app iteration 00:28:41.422 13:57:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:28:41.422 13:57:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:28:41.422 00:28:41.422 real 0m19.084s 00:28:41.422 user 0m42.375s 00:28:41.422 sys 0m3.500s 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:41.422 13:57:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:28:41.422 ************************************ 00:28:41.422 END TEST app_repeat 00:28:41.422 ************************************ 00:28:41.422 13:57:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:28:41.422 13:57:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:41.422 13:57:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:41.422 13:57:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.422 13:57:47 event -- common/autotest_common.sh@10 -- # set +x 00:28:41.682 ************************************ 00:28:41.682 START TEST cpu_locks 00:28:41.682 ************************************ 00:28:41.682 13:57:47 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:28:41.682 * Looking for test storage... 
00:28:41.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.682 13:57:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:41.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.682 --rc genhtml_branch_coverage=1 00:28:41.682 --rc genhtml_function_coverage=1 00:28:41.682 --rc genhtml_legend=1 00:28:41.682 --rc geninfo_all_blocks=1 00:28:41.682 --rc geninfo_unexecuted_blocks=1 00:28:41.682 00:28:41.682 ' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:41.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.682 --rc genhtml_branch_coverage=1 00:28:41.682 --rc genhtml_function_coverage=1 00:28:41.682 --rc genhtml_legend=1 00:28:41.682 --rc geninfo_all_blocks=1 00:28:41.682 --rc geninfo_unexecuted_blocks=1 
00:28:41.682 00:28:41.682 ' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:41.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.682 --rc genhtml_branch_coverage=1 00:28:41.682 --rc genhtml_function_coverage=1 00:28:41.682 --rc genhtml_legend=1 00:28:41.682 --rc geninfo_all_blocks=1 00:28:41.682 --rc geninfo_unexecuted_blocks=1 00:28:41.682 00:28:41.682 ' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:41.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.682 --rc genhtml_branch_coverage=1 00:28:41.682 --rc genhtml_function_coverage=1 00:28:41.682 --rc genhtml_legend=1 00:28:41.682 --rc geninfo_all_blocks=1 00:28:41.682 --rc geninfo_unexecuted_blocks=1 00:28:41.682 00:28:41.682 ' 00:28:41.682 13:57:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:28:41.682 13:57:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:28:41.682 13:57:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:28:41.682 13:57:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:41.682 13:57:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:41.682 ************************************ 00:28:41.682 START TEST default_locks 00:28:41.682 ************************************ 00:28:41.682 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:28:41.682 13:57:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70996 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70996 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@831 -- # '[' -z 70996 ']' 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.683 13:57:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:41.941 [2024-10-09 13:57:48.338102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:41.941 [2024-10-09 13:57:48.339159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:28:42.200 [2024-10-09 13:57:48.520786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.200 [2024-10-09 13:57:48.578836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.767 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.767 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:28:42.767 13:57:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70996 00:28:42.767 13:57:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70996 00:28:42.767 13:57:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70996 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70996 ']' 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70996 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.333 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70996 00:28:43.592 killing process with pid 70996 00:28:43.592 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.592 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.592 13:57:49 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70996' 00:28:43.592 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70996 00:28:43.592 13:57:49 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70996 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70996 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70996 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70996 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70996 ']' 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:43.850 ERROR: process (pid: 70996) is no longer running 00:28:43.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70996) - No such process 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:43.850 00:28:43.850 real 0m2.116s 00:28:43.850 user 0m2.182s 00:28:43.850 sys 0m0.797s 00:28:43.850 ************************************ 00:28:43.850 END TEST default_locks 00:28:43.850 ************************************ 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.850 13:57:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:28:43.850 13:57:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:28:43.850 13:57:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:28:43.850 13:57:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:43.850 13:57:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:43.850 ************************************ 00:28:43.850 START TEST default_locks_via_rpc 00:28:43.850 ************************************ 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71055 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71055 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71055 ']' 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.850 13:57:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:44.108 [2024-10-09 13:57:50.536497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:44.109 [2024-10-09 13:57:50.536927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:28:44.366 [2024-10-09 13:57:50.718645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.366 [2024-10-09 13:57:50.766702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:44.933 13:57:51 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71055 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71055 00:28:44.933 13:57:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71055 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71055 ']' 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71055 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.502 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71055 00:28:45.760 killing process with pid 71055 00:28:45.760 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:45.760 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:45.760 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71055' 00:28:45.760 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71055 00:28:45.760 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71055 00:28:46.018 00:28:46.018 real 0m2.110s 00:28:46.018 user 0m2.178s 00:28:46.018 sys 0m0.802s 00:28:46.018 ************************************ 00:28:46.018 END TEST default_locks_via_rpc 00:28:46.018 ************************************ 00:28:46.018 
13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.018 13:57:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:46.018 13:57:52 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:28:46.018 13:57:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:46.018 13:57:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:46.018 13:57:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:46.018 ************************************ 00:28:46.018 START TEST non_locking_app_on_locked_coremask 00:28:46.018 ************************************ 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71112 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71112 /var/tmp/spdk.sock 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71112 ']' 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.018 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.019 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:46.019 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.019 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:46.019 13:57:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:46.277 [2024-10-09 13:57:52.668595] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:46.277 [2024-10-09 13:57:52.668780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71112 ] 00:28:46.535 [2024-10-09 13:57:52.850633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.535 [2024-10-09 13:57:52.901009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71128 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71128 /var/tmp/spdk2.sock 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71128 ']' 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:47.102 13:57:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:28:47.361 [2024-10-09 13:57:53.753020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:47.361 [2024-10-09 13:57:53.753240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71128 ] 00:28:47.620 [2024-10-09 13:57:53.941190] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:28:47.620 [2024-10-09 13:57:53.941259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.620 [2024-10-09 13:57:54.046386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.187 13:57:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.187 13:57:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:48.187 13:57:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71112 00:28:48.187 13:57:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71112 00:28:48.187 13:57:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71112 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71112 ']' 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71112 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
71112 00:28:49.687 killing process with pid 71112 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71112' 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71112 00:28:49.687 13:57:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71112 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71128 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71128 ']' 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71128 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71128 00:28:50.256 killing process with pid 71128 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71128' 00:28:50.256 13:57:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71128 00:28:50.256 13:57:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71128 00:28:50.824 00:28:50.824 real 0m4.581s 00:28:50.824 user 0m5.073s 00:28:50.824 sys 0m1.544s 00:28:50.824 13:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:50.824 13:57:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:50.824 ************************************ 00:28:50.824 END TEST non_locking_app_on_locked_coremask 00:28:50.824 ************************************ 00:28:50.824 13:57:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:28:50.824 13:57:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:50.824 13:57:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.824 13:57:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:50.824 ************************************ 00:28:50.824 START TEST locking_app_on_unlocked_coremask 00:28:50.824 ************************************ 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71197 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71197 /var/tmp/spdk.sock 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71197 ']' 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 
00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:50.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:50.824 13:57:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:50.824 [2024-10-09 13:57:57.320953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:50.824 [2024-10-09 13:57:57.321147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ] 00:28:51.083 [2024-10-09 13:57:57.497699] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:28:51.083 [2024-10-09 13:57:57.497938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.083 [2024-10-09 13:57:57.544935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.650 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71213 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71213 /var/tmp/spdk2.sock 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71213 ']' 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:51.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:51.651 13:57:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:51.909 [2024-10-09 13:57:58.290079] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:51.909 [2024-10-09 13:57:58.290479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:28:52.169 [2024-10-09 13:57:58.474940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.169 [2024-10-09 13:57:58.573004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.735 13:57:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:52.736 13:57:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:52.736 13:57:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71213 00:28:52.736 13:57:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71213 00:28:52.736 13:57:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:54.110 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71197 00:28:54.110 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71197 ']' 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71197 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71197 00:28:54.111 killing process with pid 71197 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71197' 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71197 00:28:54.111 13:58:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71197 00:28:54.676 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71213 00:28:54.676 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71213 ']' 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71213 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71213 00:28:54.677 killing process with pid 71213 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71213' 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71213 00:28:54.677 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 71213 00:28:55.244 00:28:55.244 real 0m4.363s 00:28:55.244 user 0m4.748s 00:28:55.244 sys 0m1.475s 00:28:55.244 ************************************ 00:28:55.244 END TEST locking_app_on_unlocked_coremask 00:28:55.244 ************************************ 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:55.244 13:58:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:28:55.244 13:58:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:55.244 13:58:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.244 13:58:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:55.244 ************************************ 00:28:55.244 START TEST locking_app_on_locked_coremask 00:28:55.244 ************************************ 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71282 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71282 /var/tmp/spdk.sock 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71282 ']' 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.244 13:58:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:55.244 [2024-10-09 13:58:01.740735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:55.244 [2024-10-09 13:58:01.742451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71282 ] 00:28:55.503 [2024-10-09 13:58:01.925328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.503 [2024-10-09 13:58:01.970815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71298 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71298 /var/tmp/spdk2.sock 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71298 /var/tmp/spdk2.sock 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71298 /var/tmp/spdk2.sock 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71298 ']' 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:56.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.069 13:58:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:56.328 [2024-10-09 13:58:02.674013] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:56.328 [2024-10-09 13:58:02.674355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71298 ] 00:28:56.328 [2024-10-09 13:58:02.842055] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71282 has claimed it. 00:28:56.328 [2024-10-09 13:58:02.842127] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:28:56.895 ERROR: process (pid: 71298) is no longer running 00:28:56.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71298) - No such process 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71282 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71282 00:28:56.895 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71282 00:28:57.523 13:58:03 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71282 ']' 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71282 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71282 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71282' 00:28:57.523 killing process with pid 71282 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71282 00:28:57.523 13:58:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71282 00:28:58.105 00:28:58.105 real 0m2.754s 00:28:58.105 user 0m3.066s 00:28:58.105 sys 0m0.892s 00:28:58.105 13:58:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:58.105 13:58:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:58.105 ************************************ 00:28:58.105 END TEST locking_app_on_locked_coremask 00:28:58.105 ************************************ 00:28:58.105 13:58:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:28:58.105 13:58:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:28:58.105 13:58:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:58.105 13:58:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:58.105 ************************************ 00:28:58.105 START TEST locking_overlapped_coremask 00:28:58.105 ************************************ 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:28:58.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71351 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71351 /var/tmp/spdk.sock 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.105 13:58:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:58.105 [2024-10-09 13:58:04.521900] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:58.105 [2024-10-09 13:58:04.522262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71351 ] 00:28:58.363 [2024-10-09 13:58:04.680087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.363 [2024-10-09 13:58:04.732203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.363 [2024-10-09 13:58:04.732296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.363 [2024-10-09 13:58:04.732375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71369 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71369 /var/tmp/spdk2.sock 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71369 /var/tmp/spdk2.sock 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71369 /var/tmp/spdk2.sock 00:28:58.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71369 ']' 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.932 13:58:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:28:59.191 [2024-10-09 13:58:05.525719] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:59.191 [2024-10-09 13:58:05.526501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71369 ] 00:28:59.191 [2024-10-09 13:58:05.696519] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71351 has claimed it. 00:28:59.191 [2024-10-09 13:58:05.696604] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:28:59.758 ERROR: process (pid: 71369) is no longer running 00:28:59.758 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71369) - No such process 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71351 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71351 ']' 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71351 00:28:59.758 13:58:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71351 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.758 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71351' 00:29:00.016 killing process with pid 71351 00:29:00.016 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71351 00:29:00.016 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71351 00:29:00.276 00:29:00.276 real 0m2.296s 00:29:00.276 user 0m6.302s 00:29:00.276 sys 0m0.592s 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:00.276 ************************************ 00:29:00.276 END TEST locking_overlapped_coremask 00:29:00.276 ************************************ 00:29:00.276 13:58:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:29:00.276 13:58:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:00.276 13:58:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.276 13:58:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:00.276 ************************************ 00:29:00.276 START TEST 
locking_overlapped_coremask_via_rpc 00:29:00.276 ************************************ 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71416 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71416 /var/tmp/spdk.sock 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71416 ']' 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.276 13:58:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:00.535 [2024-10-09 13:58:06.904586] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:00.535 [2024-10-09 13:58:06.904832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71416 ] 00:29:00.793 [2024-10-09 13:58:07.085959] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:29:00.793 [2024-10-09 13:58:07.086019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.793 [2024-10-09 13:58:07.138678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.793 [2024-10-09 13:58:07.138746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.793 [2024-10-09 13:58:07.138767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71435 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71435 /var/tmp/spdk2.sock 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71435 ']' 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:01.362 13:58:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:01.362 13:58:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:01.620 [2024-10-09 13:58:07.953934] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:01.620 [2024-10-09 13:58:07.954082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71435 ] 00:29:01.620 [2024-10-09 13:58:08.126763] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:29:01.620 [2024-10-09 13:58:08.126839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:01.878 [2024-10-09 13:58:08.236832] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:29:01.878 [2024-10-09 13:58:08.236944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.878 [2024-10-09 13:58:08.237037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.492 13:58:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:02.492 [2024-10-09 13:58:08.948787] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71416 has claimed it. 00:29:02.492 request: 00:29:02.492 { 00:29:02.492 "method": "framework_enable_cpumask_locks", 00:29:02.492 "req_id": 1 00:29:02.492 } 00:29:02.492 Got JSON-RPC error response 00:29:02.492 response: 00:29:02.492 { 00:29:02.492 "code": -32603, 00:29:02.492 "message": "Failed to claim CPU core: 2" 00:29:02.492 } 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71416 /var/tmp/spdk.sock 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71416 ']' 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.492 13:58:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71435 /var/tmp/spdk2.sock 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71435 ']' 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:02.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.751 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:29:03.009 ************************************ 00:29:03.009 END TEST locking_overlapped_coremask_via_rpc 00:29:03.009 ************************************ 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:29:03.009 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:29:03.009 00:29:03.009 real 0m2.657s 00:29:03.009 user 0m1.371s 00:29:03.009 sys 0m0.212s 00:29:03.010 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:03.010 13:58:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:03.010 13:58:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:29:03.010 13:58:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71416 ]] 00:29:03.010 13:58:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71416 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71416 ']' 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71416 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71416 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.010 killing process with pid 71416 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71416' 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71416 00:29:03.010 13:58:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71416 00:29:03.578 13:58:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71435 ]] 00:29:03.578 13:58:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71435 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71435 ']' 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71435 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71435 00:29:03.578 killing process with pid 71435 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 71435' 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71435 00:29:03.578 13:58:09 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71435 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71416 ]] 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71416 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71416 ']' 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71416 00:29:04.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71416) - No such process 00:29:04.145 Process with pid 71416 is not found 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71416 is not found' 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71435 ]] 00:29:04.145 Process with pid 71435 is not found 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71435 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71435 ']' 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71435 00:29:04.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71435) - No such process 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71435 is not found' 00:29:04.145 13:58:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:29:04.145 00:29:04.145 real 0m22.429s 00:29:04.145 user 0m37.594s 00:29:04.145 sys 0m7.468s 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.145 13:58:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:04.145 
************************************ 00:29:04.145 END TEST cpu_locks 00:29:04.145 ************************************ 00:29:04.145 00:29:04.145 real 0m51.308s 00:29:04.145 user 1m36.280s 00:29:04.145 sys 0m12.182s 00:29:04.145 13:58:10 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.146 ************************************ 00:29:04.146 END TEST event 00:29:04.146 ************************************ 00:29:04.146 13:58:10 event -- common/autotest_common.sh@10 -- # set +x 00:29:04.146 13:58:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:29:04.146 13:58:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.146 13:58:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.146 13:58:10 -- common/autotest_common.sh@10 -- # set +x 00:29:04.146 ************************************ 00:29:04.146 START TEST thread 00:29:04.146 ************************************ 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:29:04.146 * Looking for test storage... 
00:29:04.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:04.146 13:58:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:04.146 13:58:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:04.146 13:58:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:04.146 13:58:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:29:04.146 13:58:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:29:04.146 13:58:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:29:04.146 13:58:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:29:04.146 13:58:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:29:04.146 13:58:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:29:04.146 13:58:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:29:04.146 13:58:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:04.146 13:58:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:29:04.146 13:58:10 thread -- scripts/common.sh@345 -- # : 1 00:29:04.146 13:58:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:04.146 13:58:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:04.146 13:58:10 thread -- scripts/common.sh@365 -- # decimal 1 00:29:04.146 13:58:10 thread -- scripts/common.sh@353 -- # local d=1 00:29:04.146 13:58:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:04.146 13:58:10 thread -- scripts/common.sh@355 -- # echo 1 00:29:04.146 13:58:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:29:04.146 13:58:10 thread -- scripts/common.sh@366 -- # decimal 2 00:29:04.146 13:58:10 thread -- scripts/common.sh@353 -- # local d=2 00:29:04.146 13:58:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:04.146 13:58:10 thread -- scripts/common.sh@355 -- # echo 2 00:29:04.146 13:58:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:29:04.146 13:58:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:04.146 13:58:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:04.146 13:58:10 thread -- scripts/common.sh@368 -- # return 0 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:04.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.146 --rc genhtml_branch_coverage=1 00:29:04.146 --rc genhtml_function_coverage=1 00:29:04.146 --rc genhtml_legend=1 00:29:04.146 --rc geninfo_all_blocks=1 00:29:04.146 --rc geninfo_unexecuted_blocks=1 00:29:04.146 00:29:04.146 ' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:04.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.146 --rc genhtml_branch_coverage=1 00:29:04.146 --rc genhtml_function_coverage=1 00:29:04.146 --rc genhtml_legend=1 00:29:04.146 --rc geninfo_all_blocks=1 00:29:04.146 --rc geninfo_unexecuted_blocks=1 00:29:04.146 00:29:04.146 ' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:04.146 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.146 --rc genhtml_branch_coverage=1 00:29:04.146 --rc genhtml_function_coverage=1 00:29:04.146 --rc genhtml_legend=1 00:29:04.146 --rc geninfo_all_blocks=1 00:29:04.146 --rc geninfo_unexecuted_blocks=1 00:29:04.146 00:29:04.146 ' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:04.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:04.146 --rc genhtml_branch_coverage=1 00:29:04.146 --rc genhtml_function_coverage=1 00:29:04.146 --rc genhtml_legend=1 00:29:04.146 --rc geninfo_all_blocks=1 00:29:04.146 --rc geninfo_unexecuted_blocks=1 00:29:04.146 00:29:04.146 ' 00:29:04.146 13:58:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.146 13:58:10 thread -- common/autotest_common.sh@10 -- # set +x 00:29:04.405 ************************************ 00:29:04.405 START TEST thread_poller_perf 00:29:04.405 ************************************ 00:29:04.405 13:58:10 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:04.405 [2024-10-09 13:58:10.737322] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:04.405 [2024-10-09 13:58:10.737458] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71567 ] 00:29:04.405 [2024-10-09 13:58:10.902778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.664 [2024-10-09 13:58:10.955414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.664 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:29:05.601 [2024-10-09T13:58:12.152Z] ====================================== 00:29:05.601 [2024-10-09T13:58:12.152Z] busy:2114489846 (cyc) 00:29:05.601 [2024-10-09T13:58:12.152Z] total_run_count: 347000 00:29:05.601 [2024-10-09T13:58:12.152Z] tsc_hz: 2100000000 (cyc) 00:29:05.601 [2024-10-09T13:58:12.152Z] ====================================== 00:29:05.601 [2024-10-09T13:58:12.152Z] poller_cost: 6093 (cyc), 2901 (nsec) 00:29:05.601 00:29:05.601 real 0m1.360s 00:29:05.601 user 0m1.141s 00:29:05.601 sys 0m0.111s 00:29:05.601 13:58:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:05.601 13:58:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:05.601 ************************************ 00:29:05.601 END TEST thread_poller_perf 00:29:05.601 ************************************ 00:29:05.601 13:58:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:05.601 13:58:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:29:05.601 13:58:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:05.601 13:58:12 thread -- common/autotest_common.sh@10 -- # set +x 00:29:05.601 ************************************ 00:29:05.601 START TEST thread_poller_perf 00:29:05.601 
************************************ 00:29:05.601 13:58:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:05.860 [2024-10-09 13:58:12.162843] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:05.860 [2024-10-09 13:58:12.162975] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71604 ] 00:29:05.860 [2024-10-09 13:58:12.320824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.860 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:29:05.860 [2024-10-09 13:58:12.367359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.237 [2024-10-09T13:58:13.788Z] ====================================== 00:29:07.237 [2024-10-09T13:58:13.788Z] busy:2103575420 (cyc) 00:29:07.237 [2024-10-09T13:58:13.788Z] total_run_count: 4940000 00:29:07.237 [2024-10-09T13:58:13.788Z] tsc_hz: 2100000000 (cyc) 00:29:07.237 [2024-10-09T13:58:13.788Z] ====================================== 00:29:07.237 [2024-10-09T13:58:13.788Z] poller_cost: 425 (cyc), 202 (nsec) 00:29:07.237 00:29:07.237 real 0m1.341s 00:29:07.237 user 0m1.126s 00:29:07.237 sys 0m0.110s 00:29:07.237 13:58:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.237 13:58:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:07.237 ************************************ 00:29:07.237 END TEST thread_poller_perf 00:29:07.237 ************************************ 00:29:07.237 13:58:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:29:07.237 00:29:07.237 real 0m3.024s 00:29:07.237 user 0m2.437s 00:29:07.237 sys 0m0.379s 00:29:07.237 13:58:13 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.237 13:58:13 thread -- common/autotest_common.sh@10 -- # set +x 00:29:07.237 ************************************ 00:29:07.237 END TEST thread 00:29:07.237 ************************************ 00:29:07.237 13:58:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:29:07.237 13:58:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:29:07.237 13:58:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:07.237 13:58:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.237 13:58:13 -- common/autotest_common.sh@10 -- # set +x 00:29:07.237 ************************************ 00:29:07.237 START TEST app_cmdline 00:29:07.237 ************************************ 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:29:07.237 * Looking for test storage... 00:29:07.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.237 13:58:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.237 --rc genhtml_branch_coverage=1 00:29:07.237 --rc genhtml_function_coverage=1 00:29:07.237 --rc 
genhtml_legend=1 00:29:07.237 --rc geninfo_all_blocks=1 00:29:07.237 --rc geninfo_unexecuted_blocks=1 00:29:07.237 00:29:07.237 ' 00:29:07.237 13:58:13 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.238 --rc genhtml_branch_coverage=1 00:29:07.238 --rc genhtml_function_coverage=1 00:29:07.238 --rc genhtml_legend=1 00:29:07.238 --rc geninfo_all_blocks=1 00:29:07.238 --rc geninfo_unexecuted_blocks=1 00:29:07.238 00:29:07.238 ' 00:29:07.238 13:58:13 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:07.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.238 --rc genhtml_branch_coverage=1 00:29:07.238 --rc genhtml_function_coverage=1 00:29:07.238 --rc genhtml_legend=1 00:29:07.238 --rc geninfo_all_blocks=1 00:29:07.238 --rc geninfo_unexecuted_blocks=1 00:29:07.238 00:29:07.238 ' 00:29:07.238 13:58:13 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:07.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.238 --rc genhtml_branch_coverage=1 00:29:07.238 --rc genhtml_function_coverage=1 00:29:07.238 --rc genhtml_legend=1 00:29:07.238 --rc geninfo_all_blocks=1 00:29:07.238 --rc geninfo_unexecuted_blocks=1 00:29:07.238 00:29:07.238 ' 00:29:07.238 13:58:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:29:07.238 13:58:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71687 00:29:07.238 13:58:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:29:07.238 13:58:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71687 00:29:07.238 13:58:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71687 ']' 00:29:07.510 13:58:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.510 13:58:13 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:29:07.510 13:58:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.510 13:58:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.510 13:58:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:07.510 [2024-10-09 13:58:13.921596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:07.510 [2024-10-09 13:58:13.922056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71687 ] 00:29:07.821 [2024-10-09 13:58:14.102904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.821 [2024-10-09 13:58:14.158349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.386 13:58:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.386 13:58:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:29:08.386 13:58:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:29:08.645 { 00:29:08.645 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:29:08.645 "fields": { 00:29:08.645 "major": 24, 00:29:08.645 "minor": 9, 00:29:08.645 "patch": 1, 00:29:08.645 "suffix": "-pre", 00:29:08.645 "commit": "b18e1bd62" 00:29:08.645 } 00:29:08.645 } 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:29:08.645 13:58:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:08.645 13:58:15 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:29:08.903 request: 00:29:08.903 { 00:29:08.903 "method": "env_dpdk_get_mem_stats", 00:29:08.903 "req_id": 1 00:29:08.903 } 00:29:08.903 Got JSON-RPC error response 00:29:08.903 response: 00:29:08.903 { 00:29:08.903 "code": -32601, 00:29:08.903 "message": "Method not found" 00:29:08.903 } 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.903 13:58:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71687 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71687 ']' 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71687 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71687 00:29:08.903 killing process with pid 71687 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71687' 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 71687 00:29:08.903 13:58:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 71687 00:29:09.476 00:29:09.476 real 0m2.273s 00:29:09.476 user 0m2.616s 00:29:09.476 sys 0m0.675s 00:29:09.476 13:58:15 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.476 13:58:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:29:09.476 ************************************ 00:29:09.476 END TEST app_cmdline 00:29:09.476 ************************************ 00:29:09.476 13:58:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:29:09.477 13:58:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:09.477 13:58:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.477 13:58:15 -- common/autotest_common.sh@10 -- # set +x 00:29:09.477 ************************************ 00:29:09.477 START TEST version 00:29:09.477 ************************************ 00:29:09.477 13:58:15 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:29:09.477 * Looking for test storage... 00:29:09.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:29:09.477 13:58:16 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:09.477 13:58:16 version -- common/autotest_common.sh@1681 -- # lcov --version 00:29:09.477 13:58:16 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:09.741 13:58:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.741 13:58:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.741 13:58:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.741 13:58:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.741 13:58:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.741 13:58:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.741 13:58:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.741 13:58:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.741 13:58:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.741 13:58:16 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:29:09.741 13:58:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.741 13:58:16 version -- scripts/common.sh@344 -- # case "$op" in 00:29:09.741 13:58:16 version -- scripts/common.sh@345 -- # : 1 00:29:09.741 13:58:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.741 13:58:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.741 13:58:16 version -- scripts/common.sh@365 -- # decimal 1 00:29:09.741 13:58:16 version -- scripts/common.sh@353 -- # local d=1 00:29:09.741 13:58:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.741 13:58:16 version -- scripts/common.sh@355 -- # echo 1 00:29:09.741 13:58:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.741 13:58:16 version -- scripts/common.sh@366 -- # decimal 2 00:29:09.741 13:58:16 version -- scripts/common.sh@353 -- # local d=2 00:29:09.741 13:58:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.741 13:58:16 version -- scripts/common.sh@355 -- # echo 2 00:29:09.741 13:58:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.741 13:58:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.741 13:58:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.741 13:58:16 version -- scripts/common.sh@368 -- # return 0 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.741 --rc genhtml_branch_coverage=1 00:29:09.741 --rc genhtml_function_coverage=1 00:29:09.741 --rc genhtml_legend=1 00:29:09.741 --rc geninfo_all_blocks=1 00:29:09.741 --rc geninfo_unexecuted_blocks=1 00:29:09.741 00:29:09.741 ' 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:29:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.741 --rc genhtml_branch_coverage=1 00:29:09.741 --rc genhtml_function_coverage=1 00:29:09.741 --rc genhtml_legend=1 00:29:09.741 --rc geninfo_all_blocks=1 00:29:09.741 --rc geninfo_unexecuted_blocks=1 00:29:09.741 00:29:09.741 ' 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.741 --rc genhtml_branch_coverage=1 00:29:09.741 --rc genhtml_function_coverage=1 00:29:09.741 --rc genhtml_legend=1 00:29:09.741 --rc geninfo_all_blocks=1 00:29:09.741 --rc geninfo_unexecuted_blocks=1 00:29:09.741 00:29:09.741 ' 00:29:09.741 13:58:16 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:09.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.741 --rc genhtml_branch_coverage=1 00:29:09.741 --rc genhtml_function_coverage=1 00:29:09.742 --rc genhtml_legend=1 00:29:09.742 --rc geninfo_all_blocks=1 00:29:09.742 --rc geninfo_unexecuted_blocks=1 00:29:09.742 00:29:09.742 ' 00:29:09.742 13:58:16 version -- app/version.sh@17 -- # get_header_version major 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # cut -f2 00:29:09.742 13:58:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # tr -d '"' 00:29:09.742 13:58:16 version -- app/version.sh@17 -- # major=24 00:29:09.742 13:58:16 version -- app/version.sh@18 -- # get_header_version minor 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # cut -f2 00:29:09.742 13:58:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # tr -d '"' 00:29:09.742 13:58:16 version -- app/version.sh@18 -- # minor=9 00:29:09.742 13:58:16 
version -- app/version.sh@19 -- # get_header_version patch 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # tr -d '"' 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # cut -f2 00:29:09.742 13:58:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:09.742 13:58:16 version -- app/version.sh@19 -- # patch=1 00:29:09.742 13:58:16 version -- app/version.sh@20 -- # get_header_version suffix 00:29:09.742 13:58:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # tr -d '"' 00:29:09.742 13:58:16 version -- app/version.sh@14 -- # cut -f2 00:29:09.742 13:58:16 version -- app/version.sh@20 -- # suffix=-pre 00:29:09.742 13:58:16 version -- app/version.sh@22 -- # version=24.9 00:29:09.742 13:58:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:29:09.742 13:58:16 version -- app/version.sh@25 -- # version=24.9.1 00:29:09.742 13:58:16 version -- app/version.sh@28 -- # version=24.9.1rc0 00:29:09.742 13:58:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:29:09.742 13:58:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:29:09.742 13:58:16 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:29:09.742 13:58:16 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:29:09.742 00:29:09.742 real 0m0.288s 00:29:09.742 user 0m0.181s 00:29:09.742 sys 0m0.158s 00:29:09.742 13:58:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.742 13:58:16 version -- common/autotest_common.sh@10 -- # set +x 00:29:09.742 ************************************ 00:29:09.742 END 
TEST version 00:29:09.742 ************************************ 00:29:09.742 13:58:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:29:09.742 13:58:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:29:09.742 13:58:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:29:09.742 13:58:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:09.742 13:58:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:09.742 13:58:16 -- common/autotest_common.sh@10 -- # set +x 00:29:09.742 ************************************ 00:29:09.742 START TEST bdev_raid 00:29:09.742 ************************************ 00:29:09.742 13:58:16 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:29:10.000 * Looking for test storage... 00:29:10.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:10.000 13:58:16 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:10.000 13:58:16 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:29:10.000 13:58:16 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:10.000 13:58:16 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:10.000 13:58:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@341 -- # 
ver2_l=1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.001 13:58:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:10.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.001 --rc genhtml_branch_coverage=1 00:29:10.001 --rc genhtml_function_coverage=1 00:29:10.001 --rc genhtml_legend=1 00:29:10.001 --rc geninfo_all_blocks=1 00:29:10.001 --rc geninfo_unexecuted_blocks=1 00:29:10.001 00:29:10.001 ' 00:29:10.001 13:58:16 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:10.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.001 --rc genhtml_branch_coverage=1 00:29:10.001 --rc genhtml_function_coverage=1 00:29:10.001 --rc genhtml_legend=1 00:29:10.001 --rc geninfo_all_blocks=1 00:29:10.001 --rc geninfo_unexecuted_blocks=1 00:29:10.001 00:29:10.001 ' 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:10.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.001 --rc genhtml_branch_coverage=1 00:29:10.001 --rc genhtml_function_coverage=1 00:29:10.001 --rc genhtml_legend=1 00:29:10.001 --rc geninfo_all_blocks=1 00:29:10.001 --rc geninfo_unexecuted_blocks=1 00:29:10.001 00:29:10.001 ' 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:10.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.001 --rc genhtml_branch_coverage=1 00:29:10.001 --rc genhtml_function_coverage=1 00:29:10.001 --rc genhtml_legend=1 00:29:10.001 --rc geninfo_all_blocks=1 00:29:10.001 --rc geninfo_unexecuted_blocks=1 00:29:10.001 00:29:10.001 ' 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:10.001 13:58:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:29:10.001 13:58:16 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.001 13:58:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.001 13:58:16 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.001 ************************************ 00:29:10.001 START TEST raid1_resize_data_offset_test 00:29:10.001 ************************************ 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71853 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71853' 00:29:10.001 Process raid pid: 71853 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71853 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71853 ']' 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.001 13:58:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.001 [2024-10-09 13:58:16.548517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:10.001 [2024-10-09 13:58:16.548976] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.260 [2024-10-09 13:58:16.730188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.260 [2024-10-09 13:58:16.776318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.518 [2024-10-09 13:58:16.820312] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:10.518 [2024-10-09 13:58:16.820533] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 malloc0 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 malloc1 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 null0 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 [2024-10-09 13:58:17.528943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:29:11.084 [2024-10-09 13:58:17.531230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:11.084 [2024-10-09 13:58:17.531274] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:29:11.084 [2024-10-09 13:58:17.531399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:11.084 [2024-10-09 13:58:17.531411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:29:11.084 [2024-10-09 13:58:17.531699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:29:11.084 [2024-10-09 13:58:17.531853] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:11.084 [2024-10-09 13:58:17.531878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:29:11.084 [2024-10-09 13:58:17.532001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.084 [2024-10-09 13:58:17.588954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.084 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.343 malloc2 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.343 [2024-10-09 13:58:17.717570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:11.343 [2024-10-09 13:58:17.721971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.343 [2024-10-09 13:58:17.724497] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71853 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71853 ']' 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71853 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71853 00:29:11.343 killing process with pid 71853 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71853' 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71853 00:29:11.343 13:58:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71853 00:29:11.343 [2024-10-09 13:58:17.808737] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:11.343 [2024-10-09 13:58:17.809752] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:29:11.343 [2024-10-09 13:58:17.809990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.343 [2024-10-09 13:58:17.810020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:29:11.343 [2024-10-09 13:58:17.816439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:11.343 [2024-10-09 13:58:17.816885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:11.343 [2024-10-09 13:58:17.817026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:29:11.602 [2024-10-09 13:58:18.044504] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:11.860 13:58:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:29:11.860 00:29:11.860 real 0m1.862s 00:29:11.860 user 0m1.878s 00:29:11.860 sys 0m0.529s 00:29:11.860 13:58:18 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:11.860 13:58:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.860 ************************************ 00:29:11.860 END TEST raid1_resize_data_offset_test 00:29:11.860 ************************************ 00:29:11.860 13:58:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:29:11.860 13:58:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:11.860 13:58:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:11.860 13:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:11.860 ************************************ 00:29:11.860 START TEST raid0_resize_superblock_test 00:29:11.860 ************************************ 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71909 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71909' 00:29:11.860 Process raid pid: 71909 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71909 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71909 ']' 00:29:11.860 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.861 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:29:11.861 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.861 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.861 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.861 13:58:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.119 [2024-10-09 13:58:18.470735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:12.119 [2024-10-09 13:58:18.471261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.119 [2024-10-09 13:58:18.650364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.377 [2024-10-09 13:58:18.700629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.377 [2024-10-09 13:58:18.745678] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.377 [2024-10-09 13:58:18.745716] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:12.944 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.944 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:29:12.944 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:29:12.944 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.944 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 
malloc0 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 [2024-10-09 13:58:19.508370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:13.202 [2024-10-09 13:58:19.508448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.202 [2024-10-09 13:58:19.508480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:13.202 [2024-10-09 13:58:19.508496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.202 [2024-10-09 13:58:19.511356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.202 [2024-10-09 13:58:19.511406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:13.202 pt0 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 08752ee1-15ff-4888-8517-b9aa2491871c 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:29:13.202 13:58:19 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 5e73a260-b258-44fa-a89d-410e385ae3e5 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 9d3204a4-f9ce-40a1-a673-4264ac9b3afd 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.202 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.202 [2024-10-09 13:58:19.656873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5e73a260-b258-44fa-a89d-410e385ae3e5 is claimed 00:29:13.202 [2024-10-09 13:58:19.656988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9d3204a4-f9ce-40a1-a673-4264ac9b3afd is claimed 00:29:13.202 [2024-10-09 13:58:19.657131] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:13.202 [2024-10-09 13:58:19.657147] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:29:13.202 [2024-10-09 13:58:19.657571] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:13.202 [2024-10-09 13:58:19.657856] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:13.203 [2024-10-09 13:58:19.657874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:29:13.203 [2024-10-09 13:58:19.658065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.203 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.203 [2024-10-09 13:58:19.749181] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:29:13.461 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 [2024-10-09 13:58:19.797128] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:13.462 [2024-10-09 13:58:19.797163] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5e73a260-b258-44fa-a89d-410e385ae3e5' was resized: old size 131072, new size 204800 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 [2024-10-09 13:58:19.805006] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:13.462 [2024-10-09 13:58:19.805035] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9d3204a4-f9ce-40a1-a673-4264ac9b3afd' was resized: old size 131072, new size 204800 00:29:13.462 [2024-10-09 13:58:19.805070] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 13:58:19 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:29:13.462 [2024-10-09 13:58:19.925164] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 [2024-10-09 13:58:19.972969] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:29:13.462 [2024-10-09 13:58:19.973061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:29:13.462 [2024-10-09 13:58:19.973078] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:13.462 [2024-10-09 13:58:19.973098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:29:13.462 [2024-10-09 13:58:19.973254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:13.462 [2024-10-09 13:58:19.973303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:13.462 [2024-10-09 13:58:19.973321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 [2024-10-09 13:58:19.980872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:13.462 [2024-10-09 13:58:19.980947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:13.462 [2024-10-09 13:58:19.980975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:13.462 [2024-10-09 13:58:19.980994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:13.462 [2024-10-09 13:58:19.984029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:13.462 [2024-10-09 13:58:19.984077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:29:13.462 pt0 00:29:13.462 [2024-10-09 13:58:19.985839] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5e73a260-b258-44fa-a89d-410e385ae3e5 00:29:13.462 [2024-10-09 13:58:19.985916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5e73a260-b258-44fa-a89d-410e385ae3e5 is claimed 00:29:13.462 [2024-10-09 13:58:19.986007] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9d3204a4-f9ce-40a1-a673-4264ac9b3afd 00:29:13.462 [2024-10-09 13:58:19.986034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9d3204a4-f9ce-40a1-a673-4264ac9b3afd is claimed 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 [2024-10-09 13:58:19.986125] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9d3204a4-f9ce-40a1-a673-4264ac9b3afd (2) smaller than existing raid bdev Raid (3) 00:29:13.462 [2024-10-09 13:58:19.986150] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 5e73a260-b258-44fa-a89d-410e385ae3e5: File exists 00:29:13.462 [2024-10-09 13:58:19.986196] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:29:13.462 [2024-10-09 13:58:19.986210] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:29:13.462 [2024-10-09 13:58:19.986477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 [2024-10-09 13:58:19.986628] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:29:13.462 [2024-10-09 13:58:19.986640] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000006600 00:29:13.462 [2024-10-09 13:58:19.986769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.462 13:58:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.462 [2024-10-09 13:58:20.001248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.720 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:13.720 13:58:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71909 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71909 ']' 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71909 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71909 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71909' 00:29:13.720 killing process with pid 71909 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71909 00:29:13.720 [2024-10-09 13:58:20.084174] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:13.720 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71909 00:29:13.720 [2024-10-09 13:58:20.084267] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:13.720 [2024-10-09 13:58:20.084326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:13.720 [2024-10-09 13:58:20.084339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:29:13.721 [2024-10-09 13:58:20.249502] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:13.978 ************************************ 00:29:13.978 END TEST raid0_resize_superblock_test 00:29:13.978 ************************************ 00:29:13.978 13:58:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:29:13.978 00:29:13.978 real 0m2.143s 00:29:13.978 user 0m2.496s 00:29:13.978 sys 0m0.561s 00:29:13.978 13:58:20 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.978 13:58:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.237 13:58:20 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:29:14.237 13:58:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:14.237 13:58:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.237 13:58:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:14.237 ************************************ 00:29:14.237 START TEST raid1_resize_superblock_test 00:29:14.237 ************************************ 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71980 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71980' 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:14.237 Process raid pid: 71980 00:29:14.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71980 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71980 ']' 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:14.237 13:58:20 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.237 [2024-10-09 13:58:20.673795] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:14.237 [2024-10-09 13:58:20.674358] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.496 [2024-10-09 13:58:20.852859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.496 [2024-10-09 13:58:20.903191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.496 [2024-10-09 13:58:20.948400] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:14.496 [2024-10-09 13:58:20.948707] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:15.061 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.321 malloc0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.321 [2024-10-09 13:58:21.735416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:15.321 [2024-10-09 13:58:21.735485] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.321 [2024-10-09 13:58:21.735517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:15.321 [2024-10-09 13:58:21.735532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.321 [2024-10-09 13:58:21.738031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.321 [2024-10-09 13:58:21.738075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:15.321 pt0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.321 903d554e-39b1-4618-be70-90896ee0e6d5 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.321 35c2d00d-f761-4c98-a7db-e5450ec6cec0 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:29:15.321 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.321 13:58:21 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 40b8701d-920c-460f-9f73-1b8fc42a4be5 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 [2024-10-09 13:58:21.877522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 35c2d00d-f761-4c98-a7db-e5450ec6cec0 is claimed 00:29:15.581 [2024-10-09 13:58:21.877627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 40b8701d-920c-460f-9f73-1b8fc42a4be5 is claimed 00:29:15.581 [2024-10-09 13:58:21.877799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:15.581 [2024-10-09 13:58:21.877825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:29:15.581 [2024-10-09 13:58:21.878168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:15.581 [2024-10-09 13:58:21.878342] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:15.581 [2024-10-09 13:58:21.878355] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:29:15.581 [2024-10-09 13:58:21.878503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 [2024-10-09 
13:58:21.961837] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 [2024-10-09 13:58:21.989766] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:15.581 [2024-10-09 13:58:21.989901] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '35c2d00d-f761-4c98-a7db-e5450ec6cec0' was resized: old size 131072, new size 204800 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 [2024-10-09 13:58:21.997626] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:15.581 [2024-10-09 13:58:21.997649] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '40b8701d-920c-460f-9f73-1b8fc42a4be5' was resized: old size 131072, new size 204800 00:29:15.581 
[2024-10-09 13:58:21.997681] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.581 13:58:22 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.581 [2024-10-09 13:58:22.097779] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.581 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.841 [2024-10-09 13:58:22.137620] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:29:15.841 [2024-10-09 13:58:22.137853] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:29:15.841 [2024-10-09 13:58:22.137929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:29:15.841 [2024-10-09 13:58:22.138184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:15.841 [2024-10-09 13:58:22.138463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:15.841 [2024-10-09 13:58:22.138655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:15.841 
[2024-10-09 13:58:22.138785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.841 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.841 [2024-10-09 13:58:22.145493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:29:15.842 [2024-10-09 13:58:22.145685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.842 [2024-10-09 13:58:22.145809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:15.842 [2024-10-09 13:58:22.145899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.842 [2024-10-09 13:58:22.148644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.842 [2024-10-09 13:58:22.148777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:29:15.842 pt0 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 [2024-10-09 13:58:22.150427] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 35c2d00d-f761-4c98-a7db-e5450ec6cec0 00:29:15.842 [2024-10-09 13:58:22.150487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 35c2d00d-f761-4c98-a7db-e5450ec6cec0 is claimed 00:29:15.842 [2024-10-09 13:58:22.150587] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 40b8701d-920c-460f-9f73-1b8fc42a4be5 00:29:15.842 [2024-10-09 13:58:22.150614] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 40b8701d-920c-460f-9f73-1b8fc42a4be5 is claimed 00:29:15.842 [2024-10-09 13:58:22.150717] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 40b8701d-920c-460f-9f73-1b8fc42a4be5 (2) smaller than existing raid bdev Raid (3) 00:29:15.842 [2024-10-09 13:58:22.150750] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 35c2d00d-f761-4c98-a7db-e5450ec6cec0: File exists 00:29:15.842 [2024-10-09 13:58:22.150800] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:29:15.842 [2024-10-09 13:58:22.150813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:29:15.842 [2024-10-09 13:58:22.151076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:29:15.842 [2024-10-09 13:58:22.151209] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:29:15.842 [2024-10-09 13:58:22.151219] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 [2024-10-09 13:58:22.151335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.842 [2024-10-09 13:58:22.165855] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71980 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71980 ']' 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71980 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71980 00:29:15.842 killing process with pid 71980 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71980' 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71980 00:29:15.842 [2024-10-09 13:58:22.259576] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:15.842 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71980 00:29:15.842 [2024-10-09 13:58:22.259665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:15.842 [2024-10-09 13:58:22.259719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:15.842 [2024-10-09 13:58:22.259730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:29:16.143 [2024-10-09 13:58:22.420827] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:16.143 ************************************ 00:29:16.143 END TEST raid1_resize_superblock_test 00:29:16.143 ************************************ 00:29:16.143 13:58:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:29:16.143 00:29:16.143 real 0m2.103s 00:29:16.143 user 0m2.421s 00:29:16.143 sys 0m0.553s 00:29:16.143 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.143 13:58:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:29:16.403 13:58:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:29:16.403 
13:58:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:16.403 13:58:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.403 13:58:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:16.403 ************************************ 00:29:16.403 START TEST raid_function_test_raid0 00:29:16.403 ************************************ 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=72055 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:16.403 Process raid pid: 72055 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72055' 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 72055 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 72055 ']' 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.403 13:58:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:16.403 [2024-10-09 13:58:22.864062] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:16.403 [2024-10-09 13:58:22.864766] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.662 [2024-10-09 13:58:23.045990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.662 [2024-10-09 13:58:23.094888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.662 [2024-10-09 13:58:23.139154] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:16.662 [2024-10-09 13:58:23.139189] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:17.601 Base_1 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:17.601 Base_2 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.601 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:17.601 [2024-10-09 13:58:23.878030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:17.601 [2024-10-09 13:58:23.880473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:17.601 [2024-10-09 13:58:23.880542] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:17.602 [2024-10-09 13:58:23.880563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:17.602 [2024-10-09 13:58:23.880910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:17.602 [2024-10-09 13:58:23.881044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:17.602 [2024-10-09 13:58:23.881055] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:29:17.602 [2024-10-09 13:58:23.881209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.602 13:58:23 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:17.602 13:58:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:29:17.861 [2024-10-09 13:58:24.170089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:17.861 /dev/nbd0 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:17.861 1+0 records in 00:29:17.861 1+0 records out 00:29:17.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441626 s, 9.3 MB/s 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:17.861 13:58:24 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:29:17.861 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:18.149 { 00:29:18.149 "nbd_device": "/dev/nbd0", 00:29:18.149 "bdev_name": "raid" 00:29:18.149 } 00:29:18.149 ]' 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:18.149 { 00:29:18.149 "nbd_device": "/dev/nbd0", 00:29:18.149 "bdev_name": "raid" 00:29:18.149 } 00:29:18.149 ]' 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:29:18.149 13:58:24 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:29:18.149 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:29:18.150 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:29:18.150 4096+0 records in 00:29:18.150 4096+0 records out 00:29:18.150 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.03826 s, 54.8 MB/s 00:29:18.150 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:29:18.408 4096+0 records in 00:29:18.408 4096+0 records out 00:29:18.408 
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.284748 s, 7.4 MB/s 00:29:18.408 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:29:18.408 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:29:18.669 128+0 records in 00:29:18.669 128+0 records out 00:29:18.669 65536 bytes (66 kB, 64 KiB) copied, 0.00158786 s, 41.3 MB/s 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:29:18.669 13:58:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:29:18.669 2035+0 records in 00:29:18.669 
2035+0 records out 00:29:18.669 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0193336 s, 53.9 MB/s 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:29:18.669 456+0 records in 00:29:18.669 456+0 records out 00:29:18.669 233472 bytes (233 kB, 228 KiB) copied, 0.00524786 s, 44.5 MB/s 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:18.669 
13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:18.669 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:18.929 [2024-10-09 13:58:25.355978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:29:18.929 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 72055 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 72055 ']' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 72055 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:19.188 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72055 00:29:19.448 killing process with pid 72055 00:29:19.448 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:29:19.449 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:19.449 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72055' 00:29:19.449 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 72055 00:29:19.449 [2024-10-09 13:58:25.741882] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:19.449 [2024-10-09 13:58:25.741993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.449 13:58:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 72055 00:29:19.449 [2024-10-09 13:58:25.742046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:19.449 [2024-10-09 13:58:25.742061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:29:19.449 [2024-10-09 13:58:25.766652] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:19.711 ************************************ 00:29:19.711 END TEST raid_function_test_raid0 00:29:19.711 ************************************ 00:29:19.711 13:58:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:29:19.711 00:29:19.711 real 0m3.261s 00:29:19.711 user 0m4.096s 00:29:19.711 sys 0m1.172s 00:29:19.711 13:58:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.711 13:58:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:29:19.711 13:58:26 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:29:19.711 13:58:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:19.711 13:58:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.711 13:58:26 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:29:19.711 ************************************ 00:29:19.711 START TEST raid_function_test_concat 00:29:19.711 ************************************ 00:29:19.711 Process raid pid: 72178 00:29:19.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72178 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72178' 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72178 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72178 ']' 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.711 13:58:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:19.711 [2024-10-09 13:58:26.180856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:19.711 [2024-10-09 13:58:26.181205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.970 [2024-10-09 13:58:26.340394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.970 [2024-10-09 13:58:26.388663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.970 [2024-10-09 13:58:26.431894] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:19.970 [2024-10-09 13:58:26.432131] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:20.907 Base_1 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:20.907 Base_2 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:20.907 [2024-10-09 13:58:27.225611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:20.907 [2024-10-09 13:58:27.228400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:20.907 [2024-10-09 13:58:27.228483] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:20.907 [2024-10-09 13:58:27.228500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:20.907 [2024-10-09 13:58:27.228869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:20.907 [2024-10-09 13:58:27.229022] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:20.907 [2024-10-09 13:58:27.229035] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:29:20.907 [2024-10-09 13:58:27.229192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.907 13:58:27 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:20.907 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:20.908 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:29:21.168 [2024-10-09 13:58:27.573675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:21.168 /dev/nbd0 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:21.168 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:21.169 1+0 records in 00:29:21.169 1+0 records out 00:29:21.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352507 s, 11.6 MB/s 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.169 
13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:29:21.169 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:29:21.430 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:21.430 { 00:29:21.430 "nbd_device": "/dev/nbd0", 00:29:21.430 "bdev_name": "raid" 00:29:21.430 } 00:29:21.430 ]' 00:29:21.430 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:21.430 { 00:29:21.430 "nbd_device": "/dev/nbd0", 00:29:21.430 "bdev_name": "raid" 00:29:21.430 } 00:29:21.430 ]' 00:29:21.430 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:21.689 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:21.689 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:21.689 13:58:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:29:21.689 
13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:29:21.689 4096+0 records in 00:29:21.689 4096+0 records out 00:29:21.689 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.028397 s, 73.9 MB/s 00:29:21.689 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:29:21.947 4096+0 records in 00:29:21.947 4096+0 
records out 00:29:21.947 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.299702 s, 7.0 MB/s 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:29:21.947 128+0 records in 00:29:21.947 128+0 records out 00:29:21.947 65536 bytes (66 kB, 64 KiB) copied, 0.000781671 s, 83.8 MB/s 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:29:21.947 2035+0 records in 00:29:21.947 2035+0 records out 00:29:21.947 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144946 s, 71.9 MB/s 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:21.947 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:29:21.948 456+0 records in 00:29:21.948 456+0 records out 00:29:21.948 233472 bytes (233 kB, 228 KiB) copied, 0.0027294 s, 85.5 MB/s 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:21.948 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:22.206 [2024-10-09 13:58:28.668003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:29:22.206 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:29:22.206 13:58:28 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:29:22.465 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:22.465 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:22.465 13:58:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72178 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72178 ']' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72178 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72178 00:29:22.723 killing process with pid 72178 00:29:22.723 13:58:29 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72178' 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72178 00:29:22.723 [2024-10-09 13:58:29.094645] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:22.723 [2024-10-09 13:58:29.094759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:22.723 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72178 00:29:22.723 [2024-10-09 13:58:29.094819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:22.723 [2024-10-09 13:58:29.094835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:29:22.723 [2024-10-09 13:58:29.119884] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:22.983 ************************************ 00:29:22.983 END TEST raid_function_test_concat 00:29:22.983 ************************************ 00:29:22.983 13:58:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:29:22.983 00:29:22.983 real 0m3.307s 00:29:22.983 user 0m4.251s 00:29:22.983 sys 0m1.091s 00:29:22.983 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:22.983 13:58:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:29:22.983 13:58:29 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:29:22.983 13:58:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:22.983 13:58:29 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:22.983 13:58:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:22.983 ************************************ 00:29:22.983 START TEST raid0_resize_test 00:29:22.983 ************************************ 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:29:22.983 Process raid pid: 72300 00:29:22.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72300 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72300' 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72300 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72300 ']' 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:22.983 13:58:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.241 [2024-10-09 13:58:29.542295] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:23.241 [2024-10-09 13:58:29.542776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.241 [2024-10-09 13:58:29.728839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.500 [2024-10-09 13:58:29.794495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.500 [2024-10-09 13:58:29.849623] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:23.500 [2024-10-09 13:58:29.850016] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.065 Base_1 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.065 Base_2 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.065 [2024-10-09 13:58:30.593678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:24.065 [2024-10-09 13:58:30.596446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:24.065 [2024-10-09 13:58:30.596527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:24.065 [2024-10-09 13:58:30.596543] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:24.065 [2024-10-09 13:58:30.596903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:29:24.065 [2024-10-09 13:58:30.597033] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:24.065 [2024-10-09 13:58:30.597045] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:29:24.065 [2024-10-09 13:58:30.597210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:29:24.065 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.066 [2024-10-09 13:58:30.601625] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:24.066 [2024-10-09 13:58:30.601657] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:29:24.066 true 
00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.066 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:29:24.066 [2024-10-09 13:58:30.613870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.324 [2024-10-09 13:58:30.653660] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:24.324 [2024-10-09 13:58:30.653691] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:29:24.324 [2024-10-09 13:58:30.653738] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:29:24.324 true 
00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.324 [2024-10-09 13:58:30.665897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72300 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72300 ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72300 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72300 00:29:24.324 killing process with pid 72300 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72300' 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72300 00:29:24.324 [2024-10-09 13:58:30.742867] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:24.324 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72300 00:29:24.324 [2024-10-09 13:58:30.742961] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:24.324 [2024-10-09 13:58:30.743014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:24.324 [2024-10-09 13:58:30.743027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:29:24.324 [2024-10-09 13:58:30.744864] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:24.587 13:58:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:29:24.587 00:29:24.587 real 0m1.564s 00:29:24.587 user 0m1.860s 00:29:24.587 sys 0m0.375s 00:29:24.587 ************************************ 00:29:24.587 END TEST raid0_resize_test 00:29:24.587 ************************************ 00:29:24.587 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:24.587 13:58:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.587 13:58:31 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:29:24.587 13:58:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:24.587 13:58:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:24.587 13:58:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:24.587 ************************************ 
00:29:24.587 START TEST raid1_resize_test 00:29:24.587 ************************************ 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72345 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72345' 00:29:24.587 Process raid pid: 72345 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72345 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72345 ']' 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:24.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.587 13:58:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.847 [2024-10-09 13:58:31.165120] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:24.847 [2024-10-09 13:58:31.165666] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.848 [2024-10-09 13:58:31.357830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.106 [2024-10-09 13:58:31.415339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.106 [2024-10-09 13:58:31.468603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:25.106 [2024-10-09 13:58:31.468831] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.673 Base_1 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.673 Base_2 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.673 [2024-10-09 13:58:32.182927] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:29:25.673 [2024-10-09 13:58:32.185292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:29:25.673 [2024-10-09 13:58:32.185359] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:25.673 [2024-10-09 13:58:32.185379] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:25.673 [2024-10-09 13:58:32.185689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:29:25.673 [2024-10-09 13:58:32.185830] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:25.673 [2024-10-09 13:58:32.185847] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:29:25.673 [2024-10-09 13:58:32.185981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.673 [2024-10-09 13:58:32.194899] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:25.673 [2024-10-09 13:58:32.195055] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:29:25.673 true 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.673 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.673 [2024-10-09 13:58:32.211121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.932 [2024-10-09 
13:58:32.250914] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:29:25.932 [2024-10-09 13:58:32.250942] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:29:25.932 [2024-10-09 13:58:32.250978] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:29:25.932 true 00:29:25.932 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.933 [2024-10-09 13:58:32.263094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72345 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72345 ']' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72345 00:29:25.933 13:58:32 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72345 00:29:25.933 killing process with pid 72345 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72345' 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72345 00:29:25.933 [2024-10-09 13:58:32.342714] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:25.933 [2024-10-09 13:58:32.342863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:25.933 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72345 00:29:25.933 [2024-10-09 13:58:32.343371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:25.933 [2024-10-09 13:58:32.343390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:29:25.933 [2024-10-09 13:58:32.344838] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:26.190 13:58:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:29:26.190 00:29:26.190 real 0m1.575s 00:29:26.190 user 0m1.827s 00:29:26.190 sys 0m0.399s 00:29:26.190 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:26.190 13:58:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.190 ************************************ 00:29:26.190 END TEST raid1_resize_test 00:29:26.190 
************************************ 00:29:26.190 13:58:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:29:26.190 13:58:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:26.190 13:58:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:29:26.190 13:58:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:26.190 13:58:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:26.190 13:58:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:26.190 ************************************ 00:29:26.190 START TEST raid_state_function_test 00:29:26.191 ************************************ 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:26.191 Process raid pid: 72402 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72402 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72402' 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72402 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72402 ']' 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.191 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:26.191 13:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.451 [2024-10-09 13:58:32.769297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:26.451 [2024-10-09 13:58:32.770102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.451 [2024-10-09 13:58:32.927929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.451 [2024-10-09 13:58:32.981598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.733 [2024-10-09 13:58:33.032554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:26.733 [2024-10-09 13:58:33.032627] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:27.300 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.300 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.301 [2024-10-09 13:58:33.781661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:27.301 [2024-10-09 13:58:33.781737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:27.301 [2024-10-09 13:58:33.781759] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:27.301 [2024-10-09 13:58:33.781784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.301 "name": "Existed_Raid", 00:29:27.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.301 "strip_size_kb": 64, 00:29:27.301 "state": "configuring", 00:29:27.301 "raid_level": "raid0", 00:29:27.301 "superblock": false, 00:29:27.301 "num_base_bdevs": 2, 00:29:27.301 "num_base_bdevs_discovered": 0, 00:29:27.301 "num_base_bdevs_operational": 2, 00:29:27.301 "base_bdevs_list": [ 00:29:27.301 { 00:29:27.301 "name": "BaseBdev1", 00:29:27.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.301 "is_configured": false, 00:29:27.301 "data_offset": 0, 00:29:27.301 "data_size": 0 00:29:27.301 }, 00:29:27.301 { 00:29:27.301 "name": "BaseBdev2", 00:29:27.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.301 "is_configured": false, 00:29:27.301 "data_offset": 0, 00:29:27.301 "data_size": 0 00:29:27.301 } 00:29:27.301 ] 00:29:27.301 }' 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.301 13:58:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 [2024-10-09 13:58:34.193636] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:27.868 [2024-10-09 13:58:34.193855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 [2024-10-09 13:58:34.201664] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:27.868 [2024-10-09 13:58:34.201858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:27.868 [2024-10-09 13:58:34.201967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:27.868 [2024-10-09 13:58:34.202021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 [2024-10-09 13:58:34.219653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:29:27.868 BaseBdev1 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 [ 00:29:27.868 { 00:29:27.868 "name": "BaseBdev1", 00:29:27.868 "aliases": [ 00:29:27.868 "459a47e4-5935-4c48-a0fb-c7ad40dd601c" 00:29:27.868 ], 00:29:27.868 "product_name": "Malloc disk", 00:29:27.868 "block_size": 512, 00:29:27.868 "num_blocks": 65536, 00:29:27.868 "uuid": "459a47e4-5935-4c48-a0fb-c7ad40dd601c", 00:29:27.868 "assigned_rate_limits": { 00:29:27.868 "rw_ios_per_sec": 0, 
00:29:27.868 "rw_mbytes_per_sec": 0, 00:29:27.868 "r_mbytes_per_sec": 0, 00:29:27.868 "w_mbytes_per_sec": 0 00:29:27.868 }, 00:29:27.868 "claimed": true, 00:29:27.868 "claim_type": "exclusive_write", 00:29:27.868 "zoned": false, 00:29:27.868 "supported_io_types": { 00:29:27.868 "read": true, 00:29:27.868 "write": true, 00:29:27.868 "unmap": true, 00:29:27.868 "flush": true, 00:29:27.868 "reset": true, 00:29:27.868 "nvme_admin": false, 00:29:27.868 "nvme_io": false, 00:29:27.868 "nvme_io_md": false, 00:29:27.868 "write_zeroes": true, 00:29:27.868 "zcopy": true, 00:29:27.868 "get_zone_info": false, 00:29:27.868 "zone_management": false, 00:29:27.868 "zone_append": false, 00:29:27.868 "compare": false, 00:29:27.868 "compare_and_write": false, 00:29:27.868 "abort": true, 00:29:27.868 "seek_hole": false, 00:29:27.868 "seek_data": false, 00:29:27.868 "copy": true, 00:29:27.868 "nvme_iov_md": false 00:29:27.868 }, 00:29:27.868 "memory_domains": [ 00:29:27.868 { 00:29:27.868 "dma_device_id": "system", 00:29:27.868 "dma_device_type": 1 00:29:27.868 }, 00:29:27.868 { 00:29:27.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:27.868 "dma_device_type": 2 00:29:27.868 } 00:29:27.868 ], 00:29:27.868 "driver_specific": {} 00:29:27.868 } 00:29:27.868 ] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:27.868 13:58:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.868 "name": "Existed_Raid", 00:29:27.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.868 "strip_size_kb": 64, 00:29:27.868 "state": "configuring", 00:29:27.868 "raid_level": "raid0", 00:29:27.868 "superblock": false, 00:29:27.868 "num_base_bdevs": 2, 00:29:27.868 "num_base_bdevs_discovered": 1, 00:29:27.868 "num_base_bdevs_operational": 2, 00:29:27.868 "base_bdevs_list": [ 00:29:27.868 { 00:29:27.868 "name": "BaseBdev1", 00:29:27.868 "uuid": "459a47e4-5935-4c48-a0fb-c7ad40dd601c", 00:29:27.868 "is_configured": true, 00:29:27.868 "data_offset": 0, 00:29:27.868 "data_size": 65536 00:29:27.868 }, 00:29:27.868 { 00:29:27.868 "name": "BaseBdev2", 00:29:27.868 
"uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.868 "is_configured": false, 00:29:27.868 "data_offset": 0, 00:29:27.868 "data_size": 0 00:29:27.868 } 00:29:27.868 ] 00:29:27.868 }' 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.868 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.436 [2024-10-09 13:58:34.703858] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:28.436 [2024-10-09 13:58:34.703926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.436 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.436 [2024-10-09 13:58:34.711911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.436 [2024-10-09 13:58:34.714382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:28.436 [2024-10-09 13:58:34.714597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.437 "name": "Existed_Raid", 00:29:28.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.437 "strip_size_kb": 64, 00:29:28.437 "state": "configuring", 00:29:28.437 "raid_level": "raid0", 00:29:28.437 "superblock": false, 00:29:28.437 "num_base_bdevs": 2, 00:29:28.437 "num_base_bdevs_discovered": 1, 00:29:28.437 "num_base_bdevs_operational": 2, 00:29:28.437 "base_bdevs_list": [ 00:29:28.437 { 00:29:28.437 "name": "BaseBdev1", 00:29:28.437 "uuid": "459a47e4-5935-4c48-a0fb-c7ad40dd601c", 00:29:28.437 "is_configured": true, 00:29:28.437 "data_offset": 0, 00:29:28.437 "data_size": 65536 00:29:28.437 }, 00:29:28.437 { 00:29:28.437 "name": "BaseBdev2", 00:29:28.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.437 "is_configured": false, 00:29:28.437 "data_offset": 0, 00:29:28.437 "data_size": 0 00:29:28.437 } 00:29:28.437 ] 00:29:28.437 }' 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.437 13:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.696 [2024-10-09 13:58:35.182213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:28.696 [2024-10-09 13:58:35.182264] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:28.696 [2024-10-09 13:58:35.182284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:28.696 [2024-10-09 13:58:35.182594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:28.696 [2024-10-09 
13:58:35.182745] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:28.696 [2024-10-09 13:58:35.182763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:29:28.696 [2024-10-09 13:58:35.182987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.696 BaseBdev2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.696 [ 00:29:28.696 { 
00:29:28.696 "name": "BaseBdev2", 00:29:28.696 "aliases": [ 00:29:28.696 "d4ef467d-0662-4f7e-83a0-440cb0fe4db5" 00:29:28.696 ], 00:29:28.696 "product_name": "Malloc disk", 00:29:28.696 "block_size": 512, 00:29:28.696 "num_blocks": 65536, 00:29:28.696 "uuid": "d4ef467d-0662-4f7e-83a0-440cb0fe4db5", 00:29:28.696 "assigned_rate_limits": { 00:29:28.696 "rw_ios_per_sec": 0, 00:29:28.696 "rw_mbytes_per_sec": 0, 00:29:28.696 "r_mbytes_per_sec": 0, 00:29:28.696 "w_mbytes_per_sec": 0 00:29:28.696 }, 00:29:28.696 "claimed": true, 00:29:28.696 "claim_type": "exclusive_write", 00:29:28.696 "zoned": false, 00:29:28.696 "supported_io_types": { 00:29:28.696 "read": true, 00:29:28.696 "write": true, 00:29:28.696 "unmap": true, 00:29:28.696 "flush": true, 00:29:28.696 "reset": true, 00:29:28.696 "nvme_admin": false, 00:29:28.696 "nvme_io": false, 00:29:28.696 "nvme_io_md": false, 00:29:28.696 "write_zeroes": true, 00:29:28.696 "zcopy": true, 00:29:28.696 "get_zone_info": false, 00:29:28.696 "zone_management": false, 00:29:28.696 "zone_append": false, 00:29:28.696 "compare": false, 00:29:28.696 "compare_and_write": false, 00:29:28.696 "abort": true, 00:29:28.696 "seek_hole": false, 00:29:28.696 "seek_data": false, 00:29:28.696 "copy": true, 00:29:28.696 "nvme_iov_md": false 00:29:28.696 }, 00:29:28.696 "memory_domains": [ 00:29:28.696 { 00:29:28.696 "dma_device_id": "system", 00:29:28.696 "dma_device_type": 1 00:29:28.696 }, 00:29:28.696 { 00:29:28.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.696 "dma_device_type": 2 00:29:28.696 } 00:29:28.696 ], 00:29:28.696 "driver_specific": {} 00:29:28.696 } 00:29:28.696 ] 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.696 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.955 "name": "Existed_Raid", 00:29:28.955 "uuid": "dec6ea57-1525-42e2-9408-f49d7e528f86", 00:29:28.955 
"strip_size_kb": 64, 00:29:28.955 "state": "online", 00:29:28.955 "raid_level": "raid0", 00:29:28.955 "superblock": false, 00:29:28.955 "num_base_bdevs": 2, 00:29:28.955 "num_base_bdevs_discovered": 2, 00:29:28.955 "num_base_bdevs_operational": 2, 00:29:28.955 "base_bdevs_list": [ 00:29:28.955 { 00:29:28.955 "name": "BaseBdev1", 00:29:28.955 "uuid": "459a47e4-5935-4c48-a0fb-c7ad40dd601c", 00:29:28.955 "is_configured": true, 00:29:28.955 "data_offset": 0, 00:29:28.955 "data_size": 65536 00:29:28.955 }, 00:29:28.955 { 00:29:28.955 "name": "BaseBdev2", 00:29:28.955 "uuid": "d4ef467d-0662-4f7e-83a0-440cb0fe4db5", 00:29:28.955 "is_configured": true, 00:29:28.955 "data_offset": 0, 00:29:28.955 "data_size": 65536 00:29:28.955 } 00:29:28.955 ] 00:29:28.955 }' 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.955 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.214 
13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:29.214 [2024-10-09 13:58:35.694713] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.214 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:29.214 "name": "Existed_Raid", 00:29:29.214 "aliases": [ 00:29:29.214 "dec6ea57-1525-42e2-9408-f49d7e528f86" 00:29:29.214 ], 00:29:29.214 "product_name": "Raid Volume", 00:29:29.214 "block_size": 512, 00:29:29.214 "num_blocks": 131072, 00:29:29.214 "uuid": "dec6ea57-1525-42e2-9408-f49d7e528f86", 00:29:29.214 "assigned_rate_limits": { 00:29:29.214 "rw_ios_per_sec": 0, 00:29:29.214 "rw_mbytes_per_sec": 0, 00:29:29.214 "r_mbytes_per_sec": 0, 00:29:29.214 "w_mbytes_per_sec": 0 00:29:29.214 }, 00:29:29.214 "claimed": false, 00:29:29.214 "zoned": false, 00:29:29.214 "supported_io_types": { 00:29:29.214 "read": true, 00:29:29.214 "write": true, 00:29:29.214 "unmap": true, 00:29:29.214 "flush": true, 00:29:29.214 "reset": true, 00:29:29.214 "nvme_admin": false, 00:29:29.214 "nvme_io": false, 00:29:29.214 "nvme_io_md": false, 00:29:29.214 "write_zeroes": true, 00:29:29.214 "zcopy": false, 00:29:29.214 "get_zone_info": false, 00:29:29.214 "zone_management": false, 00:29:29.214 "zone_append": false, 00:29:29.214 "compare": false, 00:29:29.214 "compare_and_write": false, 00:29:29.214 "abort": false, 00:29:29.214 "seek_hole": false, 00:29:29.214 "seek_data": false, 00:29:29.214 "copy": false, 00:29:29.214 "nvme_iov_md": false 00:29:29.214 }, 00:29:29.214 "memory_domains": [ 00:29:29.214 { 00:29:29.214 "dma_device_id": "system", 00:29:29.214 "dma_device_type": 1 00:29:29.214 }, 00:29:29.214 { 00:29:29.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.214 "dma_device_type": 2 00:29:29.214 }, 00:29:29.214 { 00:29:29.214 "dma_device_id": "system", 00:29:29.214 
"dma_device_type": 1 00:29:29.214 }, 00:29:29.214 { 00:29:29.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.215 "dma_device_type": 2 00:29:29.215 } 00:29:29.215 ], 00:29:29.215 "driver_specific": { 00:29:29.215 "raid": { 00:29:29.215 "uuid": "dec6ea57-1525-42e2-9408-f49d7e528f86", 00:29:29.215 "strip_size_kb": 64, 00:29:29.215 "state": "online", 00:29:29.215 "raid_level": "raid0", 00:29:29.215 "superblock": false, 00:29:29.215 "num_base_bdevs": 2, 00:29:29.215 "num_base_bdevs_discovered": 2, 00:29:29.215 "num_base_bdevs_operational": 2, 00:29:29.215 "base_bdevs_list": [ 00:29:29.215 { 00:29:29.215 "name": "BaseBdev1", 00:29:29.215 "uuid": "459a47e4-5935-4c48-a0fb-c7ad40dd601c", 00:29:29.215 "is_configured": true, 00:29:29.215 "data_offset": 0, 00:29:29.215 "data_size": 65536 00:29:29.215 }, 00:29:29.215 { 00:29:29.215 "name": "BaseBdev2", 00:29:29.215 "uuid": "d4ef467d-0662-4f7e-83a0-440cb0fe4db5", 00:29:29.215 "is_configured": true, 00:29:29.215 "data_offset": 0, 00:29:29.215 "data_size": 65536 00:29:29.215 } 00:29:29.215 ] 00:29:29.215 } 00:29:29.215 } 00:29:29.215 }' 00:29:29.215 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:29.474 BaseBdev2' 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.474 13:58:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.474 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.475 [2024-10-09 13:58:35.922508] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:29.475 [2024-10-09 13:58:35.922702] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:29.475 [2024-10-09 13:58:35.922789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:29.475 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:29.476 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.476 "name": "Existed_Raid", 00:29:29.476 "uuid": "dec6ea57-1525-42e2-9408-f49d7e528f86", 00:29:29.476 "strip_size_kb": 64, 00:29:29.476 "state": "offline", 00:29:29.476 "raid_level": "raid0", 00:29:29.476 "superblock": false, 00:29:29.476 "num_base_bdevs": 2, 00:29:29.476 "num_base_bdevs_discovered": 1, 00:29:29.476 "num_base_bdevs_operational": 1, 00:29:29.476 "base_bdevs_list": [ 00:29:29.476 { 00:29:29.476 "name": null, 00:29:29.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.477 "is_configured": false, 00:29:29.477 "data_offset": 0, 00:29:29.477 "data_size": 65536 00:29:29.477 }, 00:29:29.477 { 00:29:29.477 "name": "BaseBdev2", 00:29:29.477 "uuid": "d4ef467d-0662-4f7e-83a0-440cb0fe4db5", 00:29:29.477 "is_configured": true, 00:29:29.477 "data_offset": 0, 00:29:29.477 "data_size": 65536 00:29:29.477 } 00:29:29.477 ] 00:29:29.477 }' 00:29:29.477 13:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.477 13:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.049 [2024-10-09 13:58:36.443433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:30.049 [2024-10-09 13:58:36.443499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72402 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72402 ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72402 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72402 00:29:30.049 killing process with pid 72402 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72402' 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72402 00:29:30.049 [2024-10-09 13:58:36.550781] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:30.049 13:58:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 72402 00:29:30.049 [2024-10-09 13:58:36.552048] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:30.308 ************************************ 00:29:30.308 END TEST raid_state_function_test 00:29:30.308 ************************************ 00:29:30.308 13:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:30.308 00:29:30.308 real 0m4.164s 00:29:30.308 user 0m6.559s 00:29:30.308 sys 0m0.845s 00:29:30.308 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:30.308 13:58:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:30.567 13:58:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:29:30.567 13:58:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:30.567 13:58:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:30.567 13:58:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:30.567 ************************************ 00:29:30.567 START TEST raid_state_function_test_sb 00:29:30.567 ************************************ 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:30.567 Process raid pid: 72644 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:30.567 13:58:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72644 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72644' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72644 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72644 ']' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:30.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.567 13:58:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.567 [2024-10-09 13:58:37.027035] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:30.567 [2024-10-09 13:58:37.027237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.826 [2024-10-09 13:58:37.202243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.826 [2024-10-09 13:58:37.247760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.826 [2024-10-09 13:58:37.292306] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:30.826 [2024-10-09 13:58:37.292352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.761 [2024-10-09 13:58:37.955708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:31.761 [2024-10-09 13:58:37.955999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:31.761 [2024-10-09 13:58:37.956040] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:31.761 [2024-10-09 13:58:37.956061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.761 
13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:31.761 13:58:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.761 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:31.761 "name": "Existed_Raid", 00:29:31.761 "uuid": "33d5a205-1b75-4337-ac1b-41e38adebe94", 00:29:31.761 "strip_size_kb": 
64, 00:29:31.761 "state": "configuring", 00:29:31.761 "raid_level": "raid0", 00:29:31.761 "superblock": true, 00:29:31.761 "num_base_bdevs": 2, 00:29:31.761 "num_base_bdevs_discovered": 0, 00:29:31.761 "num_base_bdevs_operational": 2, 00:29:31.761 "base_bdevs_list": [ 00:29:31.761 { 00:29:31.761 "name": "BaseBdev1", 00:29:31.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.761 "is_configured": false, 00:29:31.761 "data_offset": 0, 00:29:31.761 "data_size": 0 00:29:31.761 }, 00:29:31.761 { 00:29:31.761 "name": "BaseBdev2", 00:29:31.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.761 "is_configured": false, 00:29:31.761 "data_offset": 0, 00:29:31.761 "data_size": 0 00:29:31.761 } 00:29:31.761 ] 00:29:31.761 }' 00:29:31.761 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:31.761 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 [2024-10-09 13:58:38.383705] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:32.021 [2024-10-09 13:58:38.383922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 [2024-10-09 13:58:38.395720] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:32.021 [2024-10-09 13:58:38.395882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:32.021 [2024-10-09 13:58:38.395989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:32.021 [2024-10-09 13:58:38.396047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 [2024-10-09 13:58:38.413117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:32.021 BaseBdev1 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 [ 00:29:32.021 { 00:29:32.021 "name": "BaseBdev1", 00:29:32.021 "aliases": [ 00:29:32.021 "3cd15759-87ef-4eb4-ab3e-a76cf502db22" 00:29:32.021 ], 00:29:32.021 "product_name": "Malloc disk", 00:29:32.021 "block_size": 512, 00:29:32.021 "num_blocks": 65536, 00:29:32.021 "uuid": "3cd15759-87ef-4eb4-ab3e-a76cf502db22", 00:29:32.021 "assigned_rate_limits": { 00:29:32.021 "rw_ios_per_sec": 0, 00:29:32.021 "rw_mbytes_per_sec": 0, 00:29:32.021 "r_mbytes_per_sec": 0, 00:29:32.021 "w_mbytes_per_sec": 0 00:29:32.021 }, 00:29:32.021 "claimed": true, 00:29:32.021 "claim_type": "exclusive_write", 00:29:32.021 "zoned": false, 00:29:32.021 "supported_io_types": { 00:29:32.021 "read": true, 00:29:32.021 "write": true, 00:29:32.021 "unmap": true, 00:29:32.021 "flush": true, 00:29:32.021 "reset": true, 00:29:32.021 "nvme_admin": false, 00:29:32.021 "nvme_io": false, 00:29:32.021 "nvme_io_md": false, 00:29:32.021 "write_zeroes": true, 00:29:32.021 "zcopy": true, 00:29:32.021 "get_zone_info": false, 00:29:32.021 "zone_management": false, 00:29:32.021 "zone_append": false, 00:29:32.021 "compare": false, 00:29:32.021 "compare_and_write": false, 00:29:32.021 
"abort": true, 00:29:32.021 "seek_hole": false, 00:29:32.021 "seek_data": false, 00:29:32.021 "copy": true, 00:29:32.021 "nvme_iov_md": false 00:29:32.021 }, 00:29:32.021 "memory_domains": [ 00:29:32.021 { 00:29:32.021 "dma_device_id": "system", 00:29:32.021 "dma_device_type": 1 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.021 "dma_device_type": 2 00:29:32.021 } 00:29:32.021 ], 00:29:32.021 "driver_specific": {} 00:29:32.021 } 00:29:32.021 ] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.021 "name": "Existed_Raid", 00:29:32.021 "uuid": "050c2538-ce27-4bf8-961b-e19c8667f44f", 00:29:32.021 "strip_size_kb": 64, 00:29:32.021 "state": "configuring", 00:29:32.021 "raid_level": "raid0", 00:29:32.021 "superblock": true, 00:29:32.021 "num_base_bdevs": 2, 00:29:32.021 "num_base_bdevs_discovered": 1, 00:29:32.021 "num_base_bdevs_operational": 2, 00:29:32.021 "base_bdevs_list": [ 00:29:32.021 { 00:29:32.021 "name": "BaseBdev1", 00:29:32.021 "uuid": "3cd15759-87ef-4eb4-ab3e-a76cf502db22", 00:29:32.021 "is_configured": true, 00:29:32.021 "data_offset": 2048, 00:29:32.021 "data_size": 63488 00:29:32.021 }, 00:29:32.021 { 00:29:32.021 "name": "BaseBdev2", 00:29:32.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.021 "is_configured": false, 00:29:32.021 "data_offset": 0, 00:29:32.021 "data_size": 0 00:29:32.021 } 00:29:32.021 ] 00:29:32.021 }' 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.021 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:32.589 [2024-10-09 13:58:38.889269] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:32.589 [2024-10-09 13:58:38.889328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.589 [2024-10-09 13:58:38.897316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:32.589 [2024-10-09 13:58:38.899778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:32.589 [2024-10-09 13:58:38.899908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.589 "name": "Existed_Raid", 00:29:32.589 "uuid": "16cb2558-2bba-40fc-953a-57edf5b8a1c3", 00:29:32.589 "strip_size_kb": 64, 00:29:32.589 "state": "configuring", 00:29:32.589 "raid_level": "raid0", 00:29:32.589 "superblock": true, 00:29:32.589 "num_base_bdevs": 2, 00:29:32.589 "num_base_bdevs_discovered": 1, 00:29:32.589 "num_base_bdevs_operational": 2, 00:29:32.589 "base_bdevs_list": [ 00:29:32.589 { 00:29:32.589 "name": "BaseBdev1", 00:29:32.589 "uuid": "3cd15759-87ef-4eb4-ab3e-a76cf502db22", 00:29:32.589 "is_configured": true, 00:29:32.589 "data_offset": 2048, 
00:29:32.589 "data_size": 63488 00:29:32.589 }, 00:29:32.589 { 00:29:32.589 "name": "BaseBdev2", 00:29:32.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.589 "is_configured": false, 00:29:32.589 "data_offset": 0, 00:29:32.589 "data_size": 0 00:29:32.589 } 00:29:32.589 ] 00:29:32.589 }' 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.589 13:58:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.858 [2024-10-09 13:58:39.357077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:32.858 [2024-10-09 13:58:39.357280] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:32.858 [2024-10-09 13:58:39.357298] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:32.858 [2024-10-09 13:58:39.357658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:32.858 BaseBdev2 00:29:32.858 [2024-10-09 13:58:39.357812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:32.858 [2024-10-09 13:58:39.357836] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:29:32.858 [2024-10-09 13:58:39.357955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.858 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.858 [ 00:29:32.858 { 00:29:32.858 "name": "BaseBdev2", 00:29:32.858 "aliases": [ 00:29:32.858 "ce3591bc-ac23-4137-9363-e6ebff3680b8" 00:29:32.858 ], 00:29:32.858 "product_name": "Malloc disk", 00:29:32.858 "block_size": 512, 00:29:32.858 "num_blocks": 65536, 00:29:32.858 "uuid": "ce3591bc-ac23-4137-9363-e6ebff3680b8", 00:29:32.858 "assigned_rate_limits": { 00:29:32.858 "rw_ios_per_sec": 0, 00:29:32.858 "rw_mbytes_per_sec": 0, 00:29:32.858 "r_mbytes_per_sec": 0, 00:29:32.858 "w_mbytes_per_sec": 0 00:29:32.858 }, 00:29:32.858 "claimed": true, 00:29:32.858 "claim_type": 
"exclusive_write", 00:29:32.858 "zoned": false, 00:29:32.858 "supported_io_types": { 00:29:32.858 "read": true, 00:29:32.858 "write": true, 00:29:32.858 "unmap": true, 00:29:32.858 "flush": true, 00:29:32.858 "reset": true, 00:29:32.858 "nvme_admin": false, 00:29:32.858 "nvme_io": false, 00:29:32.858 "nvme_io_md": false, 00:29:32.858 "write_zeroes": true, 00:29:32.858 "zcopy": true, 00:29:32.858 "get_zone_info": false, 00:29:32.858 "zone_management": false, 00:29:32.859 "zone_append": false, 00:29:32.859 "compare": false, 00:29:32.859 "compare_and_write": false, 00:29:32.859 "abort": true, 00:29:32.859 "seek_hole": false, 00:29:32.859 "seek_data": false, 00:29:32.859 "copy": true, 00:29:32.859 "nvme_iov_md": false 00:29:32.859 }, 00:29:32.859 "memory_domains": [ 00:29:32.859 { 00:29:32.859 "dma_device_id": "system", 00:29:32.859 "dma_device_type": 1 00:29:32.859 }, 00:29:32.859 { 00:29:32.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.859 "dma_device_type": 2 00:29:32.859 } 00:29:32.859 ], 00:29:32.859 "driver_specific": {} 00:29:32.859 } 00:29:32.859 ] 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:32.859 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.121 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.121 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.121 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:33.121 "name": "Existed_Raid", 00:29:33.121 "uuid": "16cb2558-2bba-40fc-953a-57edf5b8a1c3", 00:29:33.121 "strip_size_kb": 64, 00:29:33.121 "state": "online", 00:29:33.121 "raid_level": "raid0", 00:29:33.121 "superblock": true, 00:29:33.121 "num_base_bdevs": 2, 00:29:33.121 "num_base_bdevs_discovered": 2, 00:29:33.121 "num_base_bdevs_operational": 2, 00:29:33.121 "base_bdevs_list": [ 00:29:33.121 { 00:29:33.121 "name": "BaseBdev1", 00:29:33.121 "uuid": "3cd15759-87ef-4eb4-ab3e-a76cf502db22", 00:29:33.121 "is_configured": true, 00:29:33.121 "data_offset": 2048, 00:29:33.121 "data_size": 63488 
00:29:33.121 }, 00:29:33.121 { 00:29:33.121 "name": "BaseBdev2", 00:29:33.121 "uuid": "ce3591bc-ac23-4137-9363-e6ebff3680b8", 00:29:33.121 "is_configured": true, 00:29:33.121 "data_offset": 2048, 00:29:33.121 "data_size": 63488 00:29:33.121 } 00:29:33.121 ] 00:29:33.121 }' 00:29:33.121 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:33.121 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:33.447 [2024-10-09 13:58:39.805528] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:33.447 "name": 
"Existed_Raid", 00:29:33.447 "aliases": [ 00:29:33.447 "16cb2558-2bba-40fc-953a-57edf5b8a1c3" 00:29:33.447 ], 00:29:33.447 "product_name": "Raid Volume", 00:29:33.447 "block_size": 512, 00:29:33.447 "num_blocks": 126976, 00:29:33.447 "uuid": "16cb2558-2bba-40fc-953a-57edf5b8a1c3", 00:29:33.447 "assigned_rate_limits": { 00:29:33.447 "rw_ios_per_sec": 0, 00:29:33.447 "rw_mbytes_per_sec": 0, 00:29:33.447 "r_mbytes_per_sec": 0, 00:29:33.447 "w_mbytes_per_sec": 0 00:29:33.447 }, 00:29:33.447 "claimed": false, 00:29:33.447 "zoned": false, 00:29:33.447 "supported_io_types": { 00:29:33.447 "read": true, 00:29:33.447 "write": true, 00:29:33.447 "unmap": true, 00:29:33.447 "flush": true, 00:29:33.447 "reset": true, 00:29:33.447 "nvme_admin": false, 00:29:33.447 "nvme_io": false, 00:29:33.447 "nvme_io_md": false, 00:29:33.447 "write_zeroes": true, 00:29:33.447 "zcopy": false, 00:29:33.447 "get_zone_info": false, 00:29:33.447 "zone_management": false, 00:29:33.447 "zone_append": false, 00:29:33.447 "compare": false, 00:29:33.447 "compare_and_write": false, 00:29:33.447 "abort": false, 00:29:33.447 "seek_hole": false, 00:29:33.447 "seek_data": false, 00:29:33.447 "copy": false, 00:29:33.447 "nvme_iov_md": false 00:29:33.447 }, 00:29:33.447 "memory_domains": [ 00:29:33.447 { 00:29:33.447 "dma_device_id": "system", 00:29:33.447 "dma_device_type": 1 00:29:33.447 }, 00:29:33.447 { 00:29:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:33.447 "dma_device_type": 2 00:29:33.447 }, 00:29:33.447 { 00:29:33.447 "dma_device_id": "system", 00:29:33.447 "dma_device_type": 1 00:29:33.447 }, 00:29:33.447 { 00:29:33.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:33.447 "dma_device_type": 2 00:29:33.447 } 00:29:33.447 ], 00:29:33.447 "driver_specific": { 00:29:33.447 "raid": { 00:29:33.447 "uuid": "16cb2558-2bba-40fc-953a-57edf5b8a1c3", 00:29:33.447 "strip_size_kb": 64, 00:29:33.447 "state": "online", 00:29:33.447 "raid_level": "raid0", 00:29:33.447 "superblock": true, 00:29:33.447 
"num_base_bdevs": 2, 00:29:33.447 "num_base_bdevs_discovered": 2, 00:29:33.447 "num_base_bdevs_operational": 2, 00:29:33.447 "base_bdevs_list": [ 00:29:33.447 { 00:29:33.447 "name": "BaseBdev1", 00:29:33.447 "uuid": "3cd15759-87ef-4eb4-ab3e-a76cf502db22", 00:29:33.447 "is_configured": true, 00:29:33.447 "data_offset": 2048, 00:29:33.447 "data_size": 63488 00:29:33.447 }, 00:29:33.447 { 00:29:33.447 "name": "BaseBdev2", 00:29:33.447 "uuid": "ce3591bc-ac23-4137-9363-e6ebff3680b8", 00:29:33.447 "is_configured": true, 00:29:33.447 "data_offset": 2048, 00:29:33.447 "data_size": 63488 00:29:33.447 } 00:29:33.447 ] 00:29:33.447 } 00:29:33.447 } 00:29:33.447 }' 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:33.447 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:33.447 BaseBdev2' 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.448 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.731 [2024-10-09 13:58:39.985336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:33.731 [2024-10-09 13:58:39.985375] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:33.731 [2024-10-09 13:58:39.985428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:33.731 13:58:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:33.731 13:58:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:33.731 "name": "Existed_Raid", 00:29:33.731 "uuid": "16cb2558-2bba-40fc-953a-57edf5b8a1c3", 00:29:33.731 "strip_size_kb": 64, 00:29:33.731 "state": "offline", 00:29:33.731 "raid_level": "raid0", 00:29:33.731 "superblock": true, 00:29:33.731 "num_base_bdevs": 2, 00:29:33.731 "num_base_bdevs_discovered": 1, 00:29:33.731 "num_base_bdevs_operational": 1, 00:29:33.731 "base_bdevs_list": [ 00:29:33.731 { 00:29:33.731 "name": null, 00:29:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:33.731 "is_configured": false, 00:29:33.731 "data_offset": 0, 00:29:33.731 "data_size": 63488 00:29:33.731 }, 00:29:33.731 { 00:29:33.731 "name": "BaseBdev2", 00:29:33.731 "uuid": "ce3591bc-ac23-4137-9363-e6ebff3680b8", 00:29:33.731 "is_configured": true, 00:29:33.731 "data_offset": 2048, 00:29:33.731 "data_size": 63488 00:29:33.731 } 00:29:33.731 ] 00:29:33.731 }' 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:33.731 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:33.990 13:58:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.990 [2024-10-09 13:58:40.513669] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:33.990 [2024-10-09 13:58:40.513763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.990 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.248 13:58:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72644 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72644 ']' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72644 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72644 00:29:34.248 killing process with pid 72644 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72644' 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72644 00:29:34.248 [2024-10-09 13:58:40.620049] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:34.248 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72644 00:29:34.248 [2024-10-09 13:58:40.621165] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:34.507 ************************************ 
00:29:34.507 END TEST raid_state_function_test_sb 00:29:34.507 ************************************ 00:29:34.507 13:58:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:34.507 00:29:34.507 real 0m3.960s 00:29:34.507 user 0m6.185s 00:29:34.507 sys 0m0.875s 00:29:34.507 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:34.507 13:58:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.508 13:58:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:29:34.508 13:58:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:34.508 13:58:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:34.508 13:58:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:34.508 ************************************ 00:29:34.508 START TEST raid_superblock_test 00:29:34.508 ************************************ 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:34.508 
13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:34.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72885 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72885 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72885 ']' 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:34.508 13:58:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:34.508 [2024-10-09 13:58:41.050103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:34.508 [2024-10-09 13:58:41.050319] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72885 ] 00:29:34.767 [2024-10-09 13:58:41.228930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.767 [2024-10-09 13:58:41.274308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.025 [2024-10-09 13:58:41.317540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:35.025 [2024-10-09 13:58:41.317807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:35.592 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:35.593 13:58:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 malloc1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 [2024-10-09 13:58:42.042049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:35.593 [2024-10-09 13:58:42.042122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.593 [2024-10-09 13:58:42.042151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:35.593 [2024-10-09 13:58:42.042170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.593 [2024-10-09 13:58:42.044704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.593 [2024-10-09 13:58:42.044874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:35.593 pt1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:35.593 13:58:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 malloc2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 [2024-10-09 13:58:42.076084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:35.593 [2024-10-09 13:58:42.076151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.593 [2024-10-09 13:58:42.076174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:35.593 
[2024-10-09 13:58:42.076192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.593 [2024-10-09 13:58:42.079013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.593 [2024-10-09 13:58:42.079180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:35.593 pt2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 [2024-10-09 13:58:42.088126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:35.593 [2024-10-09 13:58:42.090414] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:35.593 [2024-10-09 13:58:42.090702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:35.593 [2024-10-09 13:58:42.090727] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:35.593 [2024-10-09 13:58:42.091008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:35.593 [2024-10-09 13:58:42.091135] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:35.593 [2024-10-09 13:58:42.091145] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:29:35.593 [2024-10-09 13:58:42.091261] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.593 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.851 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.851 "name": "raid_bdev1", 00:29:35.851 "uuid": 
"5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:35.851 "strip_size_kb": 64, 00:29:35.851 "state": "online", 00:29:35.851 "raid_level": "raid0", 00:29:35.851 "superblock": true, 00:29:35.851 "num_base_bdevs": 2, 00:29:35.851 "num_base_bdevs_discovered": 2, 00:29:35.851 "num_base_bdevs_operational": 2, 00:29:35.851 "base_bdevs_list": [ 00:29:35.851 { 00:29:35.851 "name": "pt1", 00:29:35.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:35.851 "is_configured": true, 00:29:35.851 "data_offset": 2048, 00:29:35.851 "data_size": 63488 00:29:35.851 }, 00:29:35.851 { 00:29:35.851 "name": "pt2", 00:29:35.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:35.851 "is_configured": true, 00:29:35.851 "data_offset": 2048, 00:29:35.851 "data_size": 63488 00:29:35.851 } 00:29:35.851 ] 00:29:35.851 }' 00:29:35.851 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.851 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.110 
13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:36.110 [2024-10-09 13:58:42.536473] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.110 "name": "raid_bdev1", 00:29:36.110 "aliases": [ 00:29:36.110 "5c034716-89b0-4901-a12a-7fdeda09fdc9" 00:29:36.110 ], 00:29:36.110 "product_name": "Raid Volume", 00:29:36.110 "block_size": 512, 00:29:36.110 "num_blocks": 126976, 00:29:36.110 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:36.110 "assigned_rate_limits": { 00:29:36.110 "rw_ios_per_sec": 0, 00:29:36.110 "rw_mbytes_per_sec": 0, 00:29:36.110 "r_mbytes_per_sec": 0, 00:29:36.110 "w_mbytes_per_sec": 0 00:29:36.110 }, 00:29:36.110 "claimed": false, 00:29:36.110 "zoned": false, 00:29:36.110 "supported_io_types": { 00:29:36.110 "read": true, 00:29:36.110 "write": true, 00:29:36.110 "unmap": true, 00:29:36.110 "flush": true, 00:29:36.110 "reset": true, 00:29:36.110 "nvme_admin": false, 00:29:36.110 "nvme_io": false, 00:29:36.110 "nvme_io_md": false, 00:29:36.110 "write_zeroes": true, 00:29:36.110 "zcopy": false, 00:29:36.110 "get_zone_info": false, 00:29:36.110 "zone_management": false, 00:29:36.110 "zone_append": false, 00:29:36.110 "compare": false, 00:29:36.110 "compare_and_write": false, 00:29:36.110 "abort": false, 00:29:36.110 "seek_hole": false, 00:29:36.110 "seek_data": false, 00:29:36.110 "copy": false, 00:29:36.110 "nvme_iov_md": false 00:29:36.110 }, 00:29:36.110 "memory_domains": [ 00:29:36.110 { 00:29:36.110 "dma_device_id": "system", 00:29:36.110 "dma_device_type": 1 00:29:36.110 }, 00:29:36.110 { 00:29:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.110 "dma_device_type": 2 00:29:36.110 }, 00:29:36.110 { 00:29:36.110 "dma_device_id": "system", 00:29:36.110 
"dma_device_type": 1 00:29:36.110 }, 00:29:36.110 { 00:29:36.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.110 "dma_device_type": 2 00:29:36.110 } 00:29:36.110 ], 00:29:36.110 "driver_specific": { 00:29:36.110 "raid": { 00:29:36.110 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:36.110 "strip_size_kb": 64, 00:29:36.110 "state": "online", 00:29:36.110 "raid_level": "raid0", 00:29:36.110 "superblock": true, 00:29:36.110 "num_base_bdevs": 2, 00:29:36.110 "num_base_bdevs_discovered": 2, 00:29:36.110 "num_base_bdevs_operational": 2, 00:29:36.110 "base_bdevs_list": [ 00:29:36.110 { 00:29:36.110 "name": "pt1", 00:29:36.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:36.110 "is_configured": true, 00:29:36.110 "data_offset": 2048, 00:29:36.110 "data_size": 63488 00:29:36.110 }, 00:29:36.110 { 00:29:36.110 "name": "pt2", 00:29:36.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.110 "is_configured": true, 00:29:36.110 "data_offset": 2048, 00:29:36.110 "data_size": 63488 00:29:36.110 } 00:29:36.110 ] 00:29:36.110 } 00:29:36.110 } 00:29:36.110 }' 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:36.110 pt2' 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:36.110 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.369 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:36.369 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.369 13:58:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.369 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.369 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.369 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 [2024-10-09 13:58:42.752452] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5c034716-89b0-4901-a12a-7fdeda09fdc9 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5c034716-89b0-4901-a12a-7fdeda09fdc9 ']' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 [2024-10-09 13:58:42.804244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:36.370 [2024-10-09 13:58:42.804294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:36.370 [2024-10-09 13:58:42.804388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:36.370 [2024-10-09 13:58:42.804442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:36.370 [2024-10-09 13:58:42.804461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:36.370 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:36.629 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.630 [2024-10-09 13:58:42.928291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:36.630 [2024-10-09 13:58:42.930636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:36.630 [2024-10-09 13:58:42.930847] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:36.630 [2024-10-09 13:58:42.930921] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:36.630 [2024-10-09 13:58:42.930942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:36.630 [2024-10-09 13:58:42.930952] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:29:36.630 request: 00:29:36.630 { 00:29:36.630 "name": "raid_bdev1", 00:29:36.630 "raid_level": "raid0", 00:29:36.630 "base_bdevs": [ 00:29:36.630 "malloc1", 00:29:36.630 "malloc2" 00:29:36.630 ], 00:29:36.630 "strip_size_kb": 64, 00:29:36.630 "superblock": false, 00:29:36.630 "method": "bdev_raid_create", 00:29:36.630 "req_id": 1 00:29:36.630 } 00:29:36.630 Got JSON-RPC error response 00:29:36.630 response: 00:29:36.630 { 00:29:36.630 "code": -17, 00:29:36.630 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:36.630 } 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.630 [2024-10-09 13:58:42.988268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:36.630 [2024-10-09 13:58:42.988439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.630 [2024-10-09 13:58:42.988498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:36.630 [2024-10-09 13:58:42.988589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.630 [2024-10-09 13:58:42.991231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.630 [2024-10-09 13:58:42.991364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:36.630 [2024-10-09 13:58:42.991525] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:36.630 [2024-10-09 13:58:42.991670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:36.630 pt1 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.630 13:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:36.630 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.630 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:36.630 "name": "raid_bdev1", 00:29:36.630 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:36.630 "strip_size_kb": 64, 00:29:36.630 "state": "configuring", 00:29:36.630 "raid_level": "raid0", 00:29:36.630 "superblock": true, 00:29:36.630 "num_base_bdevs": 2, 00:29:36.630 "num_base_bdevs_discovered": 1, 00:29:36.630 "num_base_bdevs_operational": 2, 00:29:36.630 "base_bdevs_list": [ 00:29:36.630 { 00:29:36.630 "name": "pt1", 00:29:36.630 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:36.630 "is_configured": true, 00:29:36.630 "data_offset": 2048, 00:29:36.630 "data_size": 63488 00:29:36.630 }, 00:29:36.630 { 00:29:36.630 "name": null, 00:29:36.630 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:36.630 "is_configured": false, 00:29:36.630 "data_offset": 2048, 00:29:36.630 "data_size": 63488 00:29:36.630 } 00:29:36.630 ] 00:29:36.630 }' 00:29:36.630 13:58:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:36.630 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.198 [2024-10-09 13:58:43.452372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:37.198 [2024-10-09 13:58:43.452446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:37.198 [2024-10-09 13:58:43.452473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:37.198 [2024-10-09 13:58:43.452486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:37.198 [2024-10-09 13:58:43.452940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:37.198 [2024-10-09 13:58:43.452960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:37.198 [2024-10-09 13:58:43.453040] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:37.198 [2024-10-09 13:58:43.453062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:37.198 [2024-10-09 13:58:43.453149] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:37.198 [2024-10-09 13:58:43.453159] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:37.198 [2024-10-09 13:58:43.453401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:37.198 [2024-10-09 13:58:43.453504] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:37.198 [2024-10-09 13:58:43.453520] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:29:37.198 [2024-10-09 13:58:43.453648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:37.198 pt2 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:37.198 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:37.199 "name": "raid_bdev1", 00:29:37.199 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:37.199 "strip_size_kb": 64, 00:29:37.199 "state": "online", 00:29:37.199 "raid_level": "raid0", 00:29:37.199 "superblock": true, 00:29:37.199 "num_base_bdevs": 2, 00:29:37.199 "num_base_bdevs_discovered": 2, 00:29:37.199 "num_base_bdevs_operational": 2, 00:29:37.199 "base_bdevs_list": [ 00:29:37.199 { 00:29:37.199 "name": "pt1", 00:29:37.199 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:37.199 "is_configured": true, 00:29:37.199 "data_offset": 2048, 00:29:37.199 "data_size": 63488 00:29:37.199 }, 00:29:37.199 { 00:29:37.199 "name": "pt2", 00:29:37.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.199 "is_configured": true, 00:29:37.199 "data_offset": 2048, 00:29:37.199 "data_size": 63488 00:29:37.199 } 00:29:37.199 ] 00:29:37.199 }' 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:37.199 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:37.458 
13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.458 [2024-10-09 13:58:43.908738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.458 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.458 "name": "raid_bdev1", 00:29:37.458 "aliases": [ 00:29:37.458 "5c034716-89b0-4901-a12a-7fdeda09fdc9" 00:29:37.458 ], 00:29:37.458 "product_name": "Raid Volume", 00:29:37.458 "block_size": 512, 00:29:37.458 "num_blocks": 126976, 00:29:37.458 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:37.458 "assigned_rate_limits": { 00:29:37.458 "rw_ios_per_sec": 0, 00:29:37.458 "rw_mbytes_per_sec": 0, 00:29:37.458 "r_mbytes_per_sec": 0, 00:29:37.458 "w_mbytes_per_sec": 0 00:29:37.458 }, 00:29:37.458 "claimed": false, 00:29:37.458 "zoned": false, 00:29:37.458 "supported_io_types": { 00:29:37.458 "read": true, 00:29:37.458 "write": true, 00:29:37.458 "unmap": true, 00:29:37.458 "flush": true, 00:29:37.458 "reset": true, 00:29:37.458 "nvme_admin": false, 00:29:37.458 "nvme_io": false, 00:29:37.458 "nvme_io_md": false, 00:29:37.458 
"write_zeroes": true, 00:29:37.458 "zcopy": false, 00:29:37.458 "get_zone_info": false, 00:29:37.458 "zone_management": false, 00:29:37.458 "zone_append": false, 00:29:37.458 "compare": false, 00:29:37.458 "compare_and_write": false, 00:29:37.458 "abort": false, 00:29:37.458 "seek_hole": false, 00:29:37.458 "seek_data": false, 00:29:37.458 "copy": false, 00:29:37.458 "nvme_iov_md": false 00:29:37.458 }, 00:29:37.458 "memory_domains": [ 00:29:37.458 { 00:29:37.458 "dma_device_id": "system", 00:29:37.458 "dma_device_type": 1 00:29:37.458 }, 00:29:37.458 { 00:29:37.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:37.458 "dma_device_type": 2 00:29:37.458 }, 00:29:37.458 { 00:29:37.458 "dma_device_id": "system", 00:29:37.458 "dma_device_type": 1 00:29:37.458 }, 00:29:37.458 { 00:29:37.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:37.458 "dma_device_type": 2 00:29:37.458 } 00:29:37.458 ], 00:29:37.458 "driver_specific": { 00:29:37.458 "raid": { 00:29:37.458 "uuid": "5c034716-89b0-4901-a12a-7fdeda09fdc9", 00:29:37.458 "strip_size_kb": 64, 00:29:37.458 "state": "online", 00:29:37.458 "raid_level": "raid0", 00:29:37.458 "superblock": true, 00:29:37.458 "num_base_bdevs": 2, 00:29:37.458 "num_base_bdevs_discovered": 2, 00:29:37.458 "num_base_bdevs_operational": 2, 00:29:37.458 "base_bdevs_list": [ 00:29:37.458 { 00:29:37.458 "name": "pt1", 00:29:37.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:37.459 "is_configured": true, 00:29:37.459 "data_offset": 2048, 00:29:37.459 "data_size": 63488 00:29:37.459 }, 00:29:37.459 { 00:29:37.459 "name": "pt2", 00:29:37.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:37.459 "is_configured": true, 00:29:37.459 "data_offset": 2048, 00:29:37.459 "data_size": 63488 00:29:37.459 } 00:29:37.459 ] 00:29:37.459 } 00:29:37.459 } 00:29:37.459 }' 00:29:37.459 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:29:37.459 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:37.459 pt2' 00:29:37.459 13:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.717 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.718 13:58:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:37.718 [2024-10-09 13:58:44.136764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5c034716-89b0-4901-a12a-7fdeda09fdc9 '!=' 5c034716-89b0-4901-a12a-7fdeda09fdc9 ']' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72885 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72885 ']' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72885 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72885 00:29:37.718 killing process with pid 72885 
00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72885' 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72885 00:29:37.718 [2024-10-09 13:58:44.222639] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:37.718 [2024-10-09 13:58:44.222716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:37.718 [2024-10-09 13:58:44.222766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:37.718 [2024-10-09 13:58:44.222777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:29:37.718 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72885 00:29:37.718 [2024-10-09 13:58:44.247012] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:37.976 ************************************ 00:29:37.976 END TEST raid_superblock_test 00:29:37.976 ************************************ 00:29:37.976 13:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:37.976 00:29:37.976 real 0m3.558s 00:29:37.976 user 0m5.531s 00:29:37.976 sys 0m0.809s 00:29:37.976 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.976 13:58:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.234 13:58:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:29:38.234 13:58:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:38.234 13:58:44 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.234 13:58:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:38.234 ************************************ 00:29:38.234 START TEST raid_read_error_test 00:29:38.234 ************************************ 00:29:38.234 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:38.235 13:58:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zn8NSRWGH5 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73080 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73080 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73080 ']' 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:38.235 13:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.235 [2024-10-09 13:58:44.689221] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:38.235 [2024-10-09 13:58:44.689421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73080 ] 00:29:38.493 [2024-10-09 13:58:44.869234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.493 [2024-10-09 13:58:44.912819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.493 [2024-10-09 13:58:44.956347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:38.493 [2024-10-09 13:58:44.956381] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.060 BaseBdev1_malloc 00:29:39.060 13:58:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.060 true 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.060 [2024-10-09 13:58:45.564363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:39.060 [2024-10-09 13:58:45.564417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.060 [2024-10-09 13:58:45.564452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:39.060 [2024-10-09 13:58:45.564465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.060 [2024-10-09 13:58:45.566978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.060 [2024-10-09 13:58:45.567155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:39.060 BaseBdev1 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.060 BaseBdev2_malloc 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.060 true 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.060 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.318 [2024-10-09 13:58:45.615363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:39.318 [2024-10-09 13:58:45.615523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.318 [2024-10-09 13:58:45.615629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:39.318 [2024-10-09 13:58:45.615703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.318 [2024-10-09 13:58:45.618161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.318 [2024-10-09 13:58:45.618294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:39.318 BaseBdev2 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.318 [2024-10-09 13:58:45.627409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:39.318 [2024-10-09 13:58:45.629631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:39.318 [2024-10-09 13:58:45.629932] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:39.318 [2024-10-09 13:58:45.629953] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:39.318 [2024-10-09 13:58:45.630224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:39.318 [2024-10-09 13:58:45.630354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:39.318 [2024-10-09 13:58:45.630369] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:29:39.318 [2024-10-09 13:58:45.630490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.318 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.318 "name": "raid_bdev1", 00:29:39.318 "uuid": "0ea80c8a-e2eb-4c0e-9c74-8f5d0cfc680a", 00:29:39.318 "strip_size_kb": 64, 00:29:39.318 "state": "online", 00:29:39.318 "raid_level": "raid0", 00:29:39.318 "superblock": true, 00:29:39.318 "num_base_bdevs": 2, 00:29:39.318 "num_base_bdevs_discovered": 2, 00:29:39.318 "num_base_bdevs_operational": 2, 00:29:39.318 "base_bdevs_list": [ 00:29:39.318 { 00:29:39.318 "name": "BaseBdev1", 00:29:39.319 "uuid": "3d290c32-06cc-576e-89f6-f352432f9186", 00:29:39.319 "is_configured": true, 00:29:39.319 "data_offset": 2048, 00:29:39.319 "data_size": 63488 00:29:39.319 }, 00:29:39.319 { 00:29:39.319 "name": "BaseBdev2", 00:29:39.319 "uuid": 
"59cf4c41-fcda-59b1-9987-1502e9e0e570", 00:29:39.319 "is_configured": true, 00:29:39.319 "data_offset": 2048, 00:29:39.319 "data_size": 63488 00:29:39.319 } 00:29:39.319 ] 00:29:39.319 }' 00:29:39.319 13:58:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.319 13:58:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.653 13:58:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:39.653 13:58:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:39.653 [2024-10-09 13:58:46.171920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.588 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.847 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:40.847 "name": "raid_bdev1", 00:29:40.847 "uuid": "0ea80c8a-e2eb-4c0e-9c74-8f5d0cfc680a", 00:29:40.847 "strip_size_kb": 64, 00:29:40.847 "state": "online", 00:29:40.847 "raid_level": "raid0", 00:29:40.847 "superblock": true, 00:29:40.847 "num_base_bdevs": 2, 00:29:40.847 "num_base_bdevs_discovered": 2, 00:29:40.847 "num_base_bdevs_operational": 2, 00:29:40.847 "base_bdevs_list": [ 00:29:40.847 { 00:29:40.847 "name": "BaseBdev1", 00:29:40.847 "uuid": "3d290c32-06cc-576e-89f6-f352432f9186", 00:29:40.847 "is_configured": true, 00:29:40.847 "data_offset": 2048, 00:29:40.847 "data_size": 63488 00:29:40.847 }, 00:29:40.847 { 00:29:40.847 "name": "BaseBdev2", 00:29:40.847 "uuid": 
"59cf4c41-fcda-59b1-9987-1502e9e0e570", 00:29:40.847 "is_configured": true, 00:29:40.847 "data_offset": 2048, 00:29:40.847 "data_size": 63488 00:29:40.847 } 00:29:40.847 ] 00:29:40.847 }' 00:29:40.847 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:40.847 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.105 [2024-10-09 13:58:47.555111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:41.105 [2024-10-09 13:58:47.555279] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:41.105 [2024-10-09 13:58:47.558008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.105 [2024-10-09 13:58:47.558046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.105 [2024-10-09 13:58:47.558082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.105 [2024-10-09 13:58:47.558094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:29:41.105 { 00:29:41.105 "results": [ 00:29:41.105 { 00:29:41.105 "job": "raid_bdev1", 00:29:41.105 "core_mask": "0x1", 00:29:41.105 "workload": "randrw", 00:29:41.105 "percentage": 50, 00:29:41.105 "status": "finished", 00:29:41.105 "queue_depth": 1, 00:29:41.105 "io_size": 131072, 00:29:41.105 "runtime": 1.381054, 00:29:41.105 "iops": 17059.43431610929, 00:29:41.105 "mibps": 2132.4292895136614, 00:29:41.105 "io_failed": 1, 00:29:41.105 "io_timeout": 0, 00:29:41.105 "avg_latency_us": 
80.71496876395818, 00:29:41.105 "min_latency_us": 26.453333333333333, 00:29:41.105 "max_latency_us": 1458.9561904761904 00:29:41.105 } 00:29:41.105 ], 00:29:41.105 "core_count": 1 00:29:41.105 } 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73080 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73080 ']' 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73080 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73080 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73080' 00:29:41.105 killing process with pid 73080 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73080 00:29:41.105 [2024-10-09 13:58:47.610851] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:41.105 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73080 00:29:41.105 [2024-10-09 13:58:47.626372] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zn8NSRWGH5 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:41.365 
13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:29:41.365 00:29:41.365 real 0m3.314s 00:29:41.365 user 0m4.159s 00:29:41.365 sys 0m0.599s 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.365 ************************************ 00:29:41.365 END TEST raid_read_error_test 00:29:41.365 ************************************ 00:29:41.365 13:58:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.624 13:58:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:29:41.624 13:58:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:41.624 13:58:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.624 13:58:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:41.624 ************************************ 00:29:41.624 START TEST raid_write_error_test 00:29:41.624 ************************************ 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:41.624 13:58:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vgUKqEVd9Z 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73215 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73215 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73215 ']' 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:41.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:41.624 13:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.624 [2024-10-09 13:58:48.055173] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:41.624 [2024-10-09 13:58:48.055370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:29:41.882 [2024-10-09 13:58:48.213597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.882 [2024-10-09 13:58:48.258755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.882 [2024-10-09 13:58:48.302242] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:41.882 [2024-10-09 13:58:48.302282] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.450 BaseBdev1_malloc 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.450 true 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.450 [2024-10-09 13:58:48.962264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:42.450 [2024-10-09 13:58:48.962320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.450 [2024-10-09 13:58:48.962352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:42.450 [2024-10-09 13:58:48.962364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.450 [2024-10-09 13:58:48.964916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.450 [2024-10-09 13:58:48.964956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:42.450 BaseBdev1 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.450 BaseBdev2_malloc 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:42.450 13:58:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.450 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.709 true 00:29:42.709 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.709 13:58:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:42.709 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.709 13:58:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.709 [2024-10-09 13:58:49.006461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:42.709 [2024-10-09 13:58:49.006641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.709 [2024-10-09 13:58:49.006671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:42.709 [2024-10-09 13:58:49.006683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.709 [2024-10-09 13:58:49.009203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.709 [2024-10-09 13:58:49.009241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:42.709 BaseBdev2 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.709 [2024-10-09 13:58:49.014494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:29:42.709 [2024-10-09 13:58:49.016799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:42.709 [2024-10-09 13:58:49.016971] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:42.709 [2024-10-09 13:58:49.016985] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:42.709 [2024-10-09 13:58:49.017261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:42.709 [2024-10-09 13:58:49.017390] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:42.709 [2024-10-09 13:58:49.017405] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:29:42.709 [2024-10-09 13:58:49.017525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.709 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:42.709 "name": "raid_bdev1", 00:29:42.709 "uuid": "80831cf9-c407-43be-a8e6-66c0ec317e54", 00:29:42.709 "strip_size_kb": 64, 00:29:42.709 "state": "online", 00:29:42.709 "raid_level": "raid0", 00:29:42.709 "superblock": true, 00:29:42.710 "num_base_bdevs": 2, 00:29:42.710 "num_base_bdevs_discovered": 2, 00:29:42.710 "num_base_bdevs_operational": 2, 00:29:42.710 "base_bdevs_list": [ 00:29:42.710 { 00:29:42.710 "name": "BaseBdev1", 00:29:42.710 "uuid": "9a33ee84-d7f5-516b-81ec-1d490a7f5602", 00:29:42.710 "is_configured": true, 00:29:42.710 "data_offset": 2048, 00:29:42.710 "data_size": 63488 00:29:42.710 }, 00:29:42.710 { 00:29:42.710 "name": "BaseBdev2", 00:29:42.710 "uuid": "995b9cac-0e04-5257-b58e-26d45114ae3b", 00:29:42.710 "is_configured": true, 00:29:42.710 "data_offset": 2048, 00:29:42.710 "data_size": 63488 00:29:42.710 } 00:29:42.710 ] 00:29:42.710 }' 00:29:42.710 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:42.710 13:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.968 13:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:42.968 13:58:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:43.228 [2024-10-09 13:58:49.571004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:44.165 13:58:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:44.165 "name": "raid_bdev1", 00:29:44.165 "uuid": "80831cf9-c407-43be-a8e6-66c0ec317e54", 00:29:44.165 "strip_size_kb": 64, 00:29:44.165 "state": "online", 00:29:44.165 "raid_level": "raid0", 00:29:44.165 "superblock": true, 00:29:44.165 "num_base_bdevs": 2, 00:29:44.165 "num_base_bdevs_discovered": 2, 00:29:44.165 "num_base_bdevs_operational": 2, 00:29:44.165 "base_bdevs_list": [ 00:29:44.165 { 00:29:44.165 "name": "BaseBdev1", 00:29:44.165 "uuid": "9a33ee84-d7f5-516b-81ec-1d490a7f5602", 00:29:44.165 "is_configured": true, 00:29:44.165 "data_offset": 2048, 00:29:44.165 "data_size": 63488 00:29:44.165 }, 00:29:44.165 { 00:29:44.165 "name": "BaseBdev2", 00:29:44.165 "uuid": "995b9cac-0e04-5257-b58e-26d45114ae3b", 00:29:44.165 "is_configured": true, 00:29:44.165 "data_offset": 2048, 00:29:44.165 "data_size": 63488 00:29:44.165 } 00:29:44.165 ] 00:29:44.165 }' 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:44.165 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.424 [2024-10-09 13:58:50.929691] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:44.424 [2024-10-09 13:58:50.929729] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:44.424 [2024-10-09 13:58:50.932731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:44.424 [2024-10-09 13:58:50.932909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:44.424 [2024-10-09 13:58:50.932987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:44.424 [2024-10-09 13:58:50.933236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:29:44.424 { 00:29:44.424 "results": [ 00:29:44.424 { 00:29:44.424 "job": "raid_bdev1", 00:29:44.424 "core_mask": "0x1", 00:29:44.424 "workload": "randrw", 00:29:44.424 "percentage": 50, 00:29:44.424 "status": "finished", 00:29:44.424 "queue_depth": 1, 00:29:44.424 "io_size": 131072, 00:29:44.424 "runtime": 1.356313, 00:29:44.424 "iops": 16900.228781999434, 00:29:44.424 "mibps": 2112.5285977499293, 00:29:44.424 "io_failed": 1, 00:29:44.424 "io_timeout": 0, 00:29:44.424 "avg_latency_us": 81.52904909396469, 00:29:44.424 "min_latency_us": 26.575238095238095, 00:29:44.424 "max_latency_us": 1451.1542857142856 00:29:44.424 } 00:29:44.424 ], 00:29:44.424 "core_count": 1 00:29:44.424 } 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73215 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73215 ']' 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73215 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.424 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73215 00:29:44.683 killing process with pid 73215 00:29:44.683 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:44.683 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:44.683 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73215' 00:29:44.683 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73215 00:29:44.683 [2024-10-09 13:58:50.981541] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:44.683 13:58:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73215 00:29:44.683 [2024-10-09 13:58:50.997595] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vgUKqEVd9Z 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:29:44.943 00:29:44.943 real 0m3.325s 00:29:44.943 user 0m4.233s 00:29:44.943 sys 0m0.597s 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.943 ************************************ 00:29:44.943 END TEST raid_write_error_test 00:29:44.943 ************************************ 00:29:44.943 13:58:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.943 13:58:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:44.943 13:58:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:29:44.943 13:58:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:44.943 13:58:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:44.943 13:58:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:44.943 ************************************ 00:29:44.943 START TEST raid_state_function_test 00:29:44.943 ************************************ 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:44.943 Process raid pid: 73347 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73347 
00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73347' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73347 00:29:44.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73347 ']' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:44.943 13:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.943 [2024-10-09 13:58:51.438031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:44.943 [2024-10-09 13:58:51.438286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.202 [2024-10-09 13:58:51.617815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.202 [2024-10-09 13:58:51.668462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.202 [2024-10-09 13:58:51.712654] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.202 [2024-10-09 13:58:51.712690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.771 [2024-10-09 13:58:52.311786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:45.771 [2024-10-09 13:58:52.311841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:45.771 [2024-10-09 13:58:52.311862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:45.771 [2024-10-09 13:58:52.311876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:45.771 13:58:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:45.771 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.030 "name": "Existed_Raid", 00:29:46.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.030 "strip_size_kb": 64, 00:29:46.030 "state": "configuring", 00:29:46.030 
"raid_level": "concat", 00:29:46.030 "superblock": false, 00:29:46.030 "num_base_bdevs": 2, 00:29:46.030 "num_base_bdevs_discovered": 0, 00:29:46.030 "num_base_bdevs_operational": 2, 00:29:46.030 "base_bdevs_list": [ 00:29:46.030 { 00:29:46.030 "name": "BaseBdev1", 00:29:46.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.030 "is_configured": false, 00:29:46.030 "data_offset": 0, 00:29:46.030 "data_size": 0 00:29:46.030 }, 00:29:46.030 { 00:29:46.030 "name": "BaseBdev2", 00:29:46.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.030 "is_configured": false, 00:29:46.030 "data_offset": 0, 00:29:46.030 "data_size": 0 00:29:46.030 } 00:29:46.030 ] 00:29:46.030 }' 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.030 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.289 [2024-10-09 13:58:52.739784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:46.289 [2024-10-09 13:58:52.739830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:29:46.289 [2024-10-09 13:58:52.747817] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:46.289 [2024-10-09 13:58:52.747970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:46.289 [2024-10-09 13:58:52.748055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:46.289 [2024-10-09 13:58:52.748100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.289 [2024-10-09 13:58:52.765142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.289 BaseBdev1 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.289 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.289 [ 00:29:46.289 { 00:29:46.289 "name": "BaseBdev1", 00:29:46.289 "aliases": [ 00:29:46.289 "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b" 00:29:46.289 ], 00:29:46.289 "product_name": "Malloc disk", 00:29:46.289 "block_size": 512, 00:29:46.289 "num_blocks": 65536, 00:29:46.289 "uuid": "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b", 00:29:46.289 "assigned_rate_limits": { 00:29:46.289 "rw_ios_per_sec": 0, 00:29:46.289 "rw_mbytes_per_sec": 0, 00:29:46.289 "r_mbytes_per_sec": 0, 00:29:46.289 "w_mbytes_per_sec": 0 00:29:46.289 }, 00:29:46.289 "claimed": true, 00:29:46.289 "claim_type": "exclusive_write", 00:29:46.289 "zoned": false, 00:29:46.289 "supported_io_types": { 00:29:46.289 "read": true, 00:29:46.289 "write": true, 00:29:46.289 "unmap": true, 00:29:46.289 "flush": true, 00:29:46.289 "reset": true, 00:29:46.289 "nvme_admin": false, 00:29:46.289 "nvme_io": false, 00:29:46.289 "nvme_io_md": false, 00:29:46.289 "write_zeroes": true, 00:29:46.289 "zcopy": true, 00:29:46.290 "get_zone_info": false, 00:29:46.290 "zone_management": false, 00:29:46.290 "zone_append": false, 00:29:46.290 "compare": false, 00:29:46.290 "compare_and_write": false, 00:29:46.290 "abort": true, 00:29:46.290 "seek_hole": false, 00:29:46.290 "seek_data": false, 00:29:46.290 "copy": true, 00:29:46.290 "nvme_iov_md": 
false 00:29:46.290 }, 00:29:46.290 "memory_domains": [ 00:29:46.290 { 00:29:46.290 "dma_device_id": "system", 00:29:46.290 "dma_device_type": 1 00:29:46.290 }, 00:29:46.290 { 00:29:46.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:46.290 "dma_device_type": 2 00:29:46.290 } 00:29:46.290 ], 00:29:46.290 "driver_specific": {} 00:29:46.290 } 00:29:46.290 ] 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.290 13:58:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:46.290 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.549 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.549 "name": "Existed_Raid", 00:29:46.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.549 "strip_size_kb": 64, 00:29:46.549 "state": "configuring", 00:29:46.549 "raid_level": "concat", 00:29:46.549 "superblock": false, 00:29:46.549 "num_base_bdevs": 2, 00:29:46.549 "num_base_bdevs_discovered": 1, 00:29:46.549 "num_base_bdevs_operational": 2, 00:29:46.549 "base_bdevs_list": [ 00:29:46.549 { 00:29:46.549 "name": "BaseBdev1", 00:29:46.549 "uuid": "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b", 00:29:46.549 "is_configured": true, 00:29:46.549 "data_offset": 0, 00:29:46.549 "data_size": 65536 00:29:46.549 }, 00:29:46.549 { 00:29:46.549 "name": "BaseBdev2", 00:29:46.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.549 "is_configured": false, 00:29:46.549 "data_offset": 0, 00:29:46.549 "data_size": 0 00:29:46.549 } 00:29:46.549 ] 00:29:46.549 }' 00:29:46.549 13:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.549 13:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.807 [2024-10-09 13:58:53.257316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:46.807 [2024-10-09 13:58:53.257369] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.807 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.807 [2024-10-09 13:58:53.269349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:46.807 [2024-10-09 13:58:53.271676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:46.808 [2024-10-09 13:58:53.271819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:46.808 "name": "Existed_Raid", 00:29:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.808 "strip_size_kb": 64, 00:29:46.808 "state": "configuring", 00:29:46.808 "raid_level": "concat", 00:29:46.808 "superblock": false, 00:29:46.808 "num_base_bdevs": 2, 00:29:46.808 "num_base_bdevs_discovered": 1, 00:29:46.808 "num_base_bdevs_operational": 2, 00:29:46.808 "base_bdevs_list": [ 00:29:46.808 { 00:29:46.808 "name": "BaseBdev1", 00:29:46.808 "uuid": "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b", 00:29:46.808 "is_configured": true, 00:29:46.808 "data_offset": 0, 00:29:46.808 "data_size": 65536 00:29:46.808 }, 00:29:46.808 { 00:29:46.808 "name": "BaseBdev2", 00:29:46.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.808 "is_configured": false, 00:29:46.808 "data_offset": 0, 00:29:46.808 "data_size": 0 
00:29:46.808 } 00:29:46.808 ] 00:29:46.808 }' 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:46.808 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.375 [2024-10-09 13:58:53.752089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:47.375 [2024-10-09 13:58:53.752149] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:47.375 [2024-10-09 13:58:53.752170] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:29:47.375 [2024-10-09 13:58:53.752519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:47.375 [2024-10-09 13:58:53.752690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:47.375 [2024-10-09 13:58:53.752710] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:29:47.375 [2024-10-09 13:58:53.752948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.375 BaseBdev2 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:47.375 13:58:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.375 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.376 [ 00:29:47.376 { 00:29:47.376 "name": "BaseBdev2", 00:29:47.376 "aliases": [ 00:29:47.376 "dde58eed-828c-4b7e-9c9f-446b56cf9cf6" 00:29:47.376 ], 00:29:47.376 "product_name": "Malloc disk", 00:29:47.376 "block_size": 512, 00:29:47.376 "num_blocks": 65536, 00:29:47.376 "uuid": "dde58eed-828c-4b7e-9c9f-446b56cf9cf6", 00:29:47.376 "assigned_rate_limits": { 00:29:47.376 "rw_ios_per_sec": 0, 00:29:47.376 "rw_mbytes_per_sec": 0, 00:29:47.376 "r_mbytes_per_sec": 0, 00:29:47.376 "w_mbytes_per_sec": 0 00:29:47.376 }, 00:29:47.376 "claimed": true, 00:29:47.376 "claim_type": "exclusive_write", 00:29:47.376 "zoned": false, 00:29:47.376 "supported_io_types": { 00:29:47.376 "read": true, 00:29:47.376 "write": true, 00:29:47.376 "unmap": true, 00:29:47.376 "flush": true, 00:29:47.376 "reset": true, 00:29:47.376 "nvme_admin": false, 00:29:47.376 "nvme_io": false, 00:29:47.376 "nvme_io_md": 
false, 00:29:47.376 "write_zeroes": true, 00:29:47.376 "zcopy": true, 00:29:47.376 "get_zone_info": false, 00:29:47.376 "zone_management": false, 00:29:47.376 "zone_append": false, 00:29:47.376 "compare": false, 00:29:47.376 "compare_and_write": false, 00:29:47.376 "abort": true, 00:29:47.376 "seek_hole": false, 00:29:47.376 "seek_data": false, 00:29:47.376 "copy": true, 00:29:47.376 "nvme_iov_md": false 00:29:47.376 }, 00:29:47.376 "memory_domains": [ 00:29:47.376 { 00:29:47.376 "dma_device_id": "system", 00:29:47.376 "dma_device_type": 1 00:29:47.376 }, 00:29:47.376 { 00:29:47.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:47.376 "dma_device_type": 2 00:29:47.376 } 00:29:47.376 ], 00:29:47.376 "driver_specific": {} 00:29:47.376 } 00:29:47.376 ] 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:47.376 "name": "Existed_Raid", 00:29:47.376 "uuid": "b359e786-be65-4791-bacd-0b793468f882", 00:29:47.376 "strip_size_kb": 64, 00:29:47.376 "state": "online", 00:29:47.376 "raid_level": "concat", 00:29:47.376 "superblock": false, 00:29:47.376 "num_base_bdevs": 2, 00:29:47.376 "num_base_bdevs_discovered": 2, 00:29:47.376 "num_base_bdevs_operational": 2, 00:29:47.376 "base_bdevs_list": [ 00:29:47.376 { 00:29:47.376 "name": "BaseBdev1", 00:29:47.376 "uuid": "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b", 00:29:47.376 "is_configured": true, 00:29:47.376 "data_offset": 0, 00:29:47.376 "data_size": 65536 00:29:47.376 }, 00:29:47.376 { 00:29:47.376 "name": "BaseBdev2", 00:29:47.376 "uuid": "dde58eed-828c-4b7e-9c9f-446b56cf9cf6", 00:29:47.376 "is_configured": true, 00:29:47.376 "data_offset": 0, 00:29:47.376 "data_size": 65536 00:29:47.376 } 00:29:47.376 ] 00:29:47.376 }' 00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:29:47.376 13:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 [2024-10-09 13:58:54.232533] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.944 "name": "Existed_Raid", 00:29:47.944 "aliases": [ 00:29:47.944 "b359e786-be65-4791-bacd-0b793468f882" 00:29:47.944 ], 00:29:47.944 "product_name": "Raid Volume", 00:29:47.944 "block_size": 512, 00:29:47.944 "num_blocks": 131072, 00:29:47.944 "uuid": "b359e786-be65-4791-bacd-0b793468f882", 00:29:47.944 "assigned_rate_limits": { 00:29:47.944 "rw_ios_per_sec": 0, 00:29:47.944 "rw_mbytes_per_sec": 0, 00:29:47.944 "r_mbytes_per_sec": 
0, 00:29:47.944 "w_mbytes_per_sec": 0 00:29:47.944 }, 00:29:47.944 "claimed": false, 00:29:47.944 "zoned": false, 00:29:47.944 "supported_io_types": { 00:29:47.944 "read": true, 00:29:47.944 "write": true, 00:29:47.944 "unmap": true, 00:29:47.944 "flush": true, 00:29:47.944 "reset": true, 00:29:47.944 "nvme_admin": false, 00:29:47.944 "nvme_io": false, 00:29:47.944 "nvme_io_md": false, 00:29:47.944 "write_zeroes": true, 00:29:47.944 "zcopy": false, 00:29:47.944 "get_zone_info": false, 00:29:47.944 "zone_management": false, 00:29:47.944 "zone_append": false, 00:29:47.944 "compare": false, 00:29:47.944 "compare_and_write": false, 00:29:47.944 "abort": false, 00:29:47.944 "seek_hole": false, 00:29:47.944 "seek_data": false, 00:29:47.944 "copy": false, 00:29:47.944 "nvme_iov_md": false 00:29:47.944 }, 00:29:47.944 "memory_domains": [ 00:29:47.944 { 00:29:47.944 "dma_device_id": "system", 00:29:47.944 "dma_device_type": 1 00:29:47.944 }, 00:29:47.944 { 00:29:47.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:47.944 "dma_device_type": 2 00:29:47.944 }, 00:29:47.944 { 00:29:47.944 "dma_device_id": "system", 00:29:47.944 "dma_device_type": 1 00:29:47.944 }, 00:29:47.944 { 00:29:47.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:47.944 "dma_device_type": 2 00:29:47.944 } 00:29:47.944 ], 00:29:47.944 "driver_specific": { 00:29:47.944 "raid": { 00:29:47.944 "uuid": "b359e786-be65-4791-bacd-0b793468f882", 00:29:47.944 "strip_size_kb": 64, 00:29:47.944 "state": "online", 00:29:47.944 "raid_level": "concat", 00:29:47.944 "superblock": false, 00:29:47.944 "num_base_bdevs": 2, 00:29:47.944 "num_base_bdevs_discovered": 2, 00:29:47.944 "num_base_bdevs_operational": 2, 00:29:47.944 "base_bdevs_list": [ 00:29:47.944 { 00:29:47.944 "name": "BaseBdev1", 00:29:47.944 "uuid": "e4e3e74a-6bf0-4686-a4d8-d5d4ecf6731b", 00:29:47.944 "is_configured": true, 00:29:47.944 "data_offset": 0, 00:29:47.944 "data_size": 65536 00:29:47.944 }, 00:29:47.944 { 00:29:47.944 "name": "BaseBdev2", 
00:29:47.944 "uuid": "dde58eed-828c-4b7e-9c9f-446b56cf9cf6", 00:29:47.944 "is_configured": true, 00:29:47.944 "data_offset": 0, 00:29:47.944 "data_size": 65536 00:29:47.944 } 00:29:47.944 ] 00:29:47.944 } 00:29:47.944 } 00:29:47.944 }' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:47.944 BaseBdev2' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 [2024-10-09 13:58:54.456339] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:47.944 [2024-10-09 13:58:54.456381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:47.944 [2024-10-09 13:58:54.456454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:48.203 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.203 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:48.203 "name": "Existed_Raid", 00:29:48.203 "uuid": "b359e786-be65-4791-bacd-0b793468f882", 00:29:48.203 "strip_size_kb": 64, 00:29:48.203 
"state": "offline", 00:29:48.203 "raid_level": "concat", 00:29:48.203 "superblock": false, 00:29:48.203 "num_base_bdevs": 2, 00:29:48.203 "num_base_bdevs_discovered": 1, 00:29:48.203 "num_base_bdevs_operational": 1, 00:29:48.203 "base_bdevs_list": [ 00:29:48.203 { 00:29:48.203 "name": null, 00:29:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.203 "is_configured": false, 00:29:48.203 "data_offset": 0, 00:29:48.203 "data_size": 65536 00:29:48.203 }, 00:29:48.203 { 00:29:48.203 "name": "BaseBdev2", 00:29:48.203 "uuid": "dde58eed-828c-4b7e-9c9f-446b56cf9cf6", 00:29:48.203 "is_configured": true, 00:29:48.203 "data_offset": 0, 00:29:48.203 "data_size": 65536 00:29:48.203 } 00:29:48.203 ] 00:29:48.203 }' 00:29:48.203 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:48.203 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.462 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.462 [2024-10-09 13:58:54.956666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:48.462 [2024-10-09 13:58:54.956727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:48.463 13:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73347 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73347 ']' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73347 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73347 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:48.721 killing process with pid 73347 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73347' 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73347 00:29:48.721 [2024-10-09 13:58:55.063875] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:48.721 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73347 00:29:48.721 [2024-10-09 13:58:55.064958] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:48.981 00:29:48.981 real 0m3.992s 00:29:48.981 user 0m6.266s 00:29:48.981 sys 0m0.873s 00:29:48.981 ************************************ 00:29:48.981 END TEST raid_state_function_test 00:29:48.981 ************************************ 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.981 13:58:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:29:48.981 13:58:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:29:48.981 13:58:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.981 13:58:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:48.981 ************************************ 00:29:48.981 START TEST raid_state_function_test_sb 00:29:48.981 ************************************ 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73584 00:29:48.981 Process raid pid: 73584 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73584' 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73584 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73584 ']' 00:29:48.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.981 13:58:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:48.981 [2024-10-09 13:58:55.491981] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:48.981 [2024-10-09 13:58:55.492167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.240 [2024-10-09 13:58:55.672910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.240 [2024-10-09 13:58:55.717283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.240 [2024-10-09 13:58:55.760952] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:49.240 [2024-10-09 13:58:55.760990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:50.176 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.176 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:50.176 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:50.176 13:58:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.177 [2024-10-09 13:58:56.463827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:50.177 [2024-10-09 13:58:56.463880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:50.177 [2024-10-09 13:58:56.463895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:50.177 [2024-10-09 13:58:56.463909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:50.177 
13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:50.177 "name": "Existed_Raid", 00:29:50.177 "uuid": "a8c5dfad-2e25-47f7-b73f-1fcc8ddab0a6", 00:29:50.177 "strip_size_kb": 64, 00:29:50.177 "state": "configuring", 00:29:50.177 "raid_level": "concat", 00:29:50.177 "superblock": true, 00:29:50.177 "num_base_bdevs": 2, 00:29:50.177 "num_base_bdevs_discovered": 0, 00:29:50.177 "num_base_bdevs_operational": 2, 00:29:50.177 "base_bdevs_list": [ 00:29:50.177 { 00:29:50.177 "name": "BaseBdev1", 00:29:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.177 "is_configured": false, 00:29:50.177 "data_offset": 0, 00:29:50.177 "data_size": 0 00:29:50.177 }, 00:29:50.177 { 00:29:50.177 "name": "BaseBdev2", 00:29:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.177 "is_configured": false, 00:29:50.177 "data_offset": 0, 00:29:50.177 "data_size": 0 00:29:50.177 } 00:29:50.177 ] 00:29:50.177 }' 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:50.177 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 [2024-10-09 13:58:56.851822] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:50.436 [2024-10-09 13:58:56.851875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 [2024-10-09 13:58:56.859866] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:50.436 [2024-10-09 13:58:56.859910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:50.436 [2024-10-09 13:58:56.859920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:50.436 [2024-10-09 13:58:56.859932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 [2024-10-09 13:58:56.877225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:29:50.436 BaseBdev1 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.436 [ 00:29:50.436 { 00:29:50.436 "name": "BaseBdev1", 00:29:50.436 "aliases": [ 00:29:50.436 "ec3c9a22-6461-4d06-8f24-01840f3c70c3" 00:29:50.436 ], 00:29:50.436 "product_name": "Malloc disk", 00:29:50.436 "block_size": 512, 00:29:50.436 "num_blocks": 65536, 00:29:50.436 "uuid": "ec3c9a22-6461-4d06-8f24-01840f3c70c3", 00:29:50.436 
"assigned_rate_limits": { 00:29:50.436 "rw_ios_per_sec": 0, 00:29:50.436 "rw_mbytes_per_sec": 0, 00:29:50.436 "r_mbytes_per_sec": 0, 00:29:50.436 "w_mbytes_per_sec": 0 00:29:50.436 }, 00:29:50.436 "claimed": true, 00:29:50.436 "claim_type": "exclusive_write", 00:29:50.436 "zoned": false, 00:29:50.436 "supported_io_types": { 00:29:50.436 "read": true, 00:29:50.436 "write": true, 00:29:50.436 "unmap": true, 00:29:50.436 "flush": true, 00:29:50.436 "reset": true, 00:29:50.436 "nvme_admin": false, 00:29:50.436 "nvme_io": false, 00:29:50.436 "nvme_io_md": false, 00:29:50.436 "write_zeroes": true, 00:29:50.436 "zcopy": true, 00:29:50.436 "get_zone_info": false, 00:29:50.436 "zone_management": false, 00:29:50.436 "zone_append": false, 00:29:50.436 "compare": false, 00:29:50.436 "compare_and_write": false, 00:29:50.436 "abort": true, 00:29:50.436 "seek_hole": false, 00:29:50.436 "seek_data": false, 00:29:50.436 "copy": true, 00:29:50.436 "nvme_iov_md": false 00:29:50.436 }, 00:29:50.436 "memory_domains": [ 00:29:50.436 { 00:29:50.436 "dma_device_id": "system", 00:29:50.436 "dma_device_type": 1 00:29:50.436 }, 00:29:50.436 { 00:29:50.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:50.436 "dma_device_type": 2 00:29:50.436 } 00:29:50.436 ], 00:29:50.436 "driver_specific": {} 00:29:50.436 } 00:29:50.436 ] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:50.436 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:50.437 "name": "Existed_Raid", 00:29:50.437 "uuid": "060a34f3-b97e-4ece-89f6-3ed3eba63dd3", 00:29:50.437 "strip_size_kb": 64, 00:29:50.437 "state": "configuring", 00:29:50.437 "raid_level": "concat", 00:29:50.437 "superblock": true, 00:29:50.437 "num_base_bdevs": 2, 00:29:50.437 "num_base_bdevs_discovered": 1, 00:29:50.437 "num_base_bdevs_operational": 2, 00:29:50.437 "base_bdevs_list": [ 00:29:50.437 { 00:29:50.437 "name": "BaseBdev1", 00:29:50.437 "uuid": "ec3c9a22-6461-4d06-8f24-01840f3c70c3", 00:29:50.437 "is_configured": true, 00:29:50.437 "data_offset": 
2048, 00:29:50.437 "data_size": 63488 00:29:50.437 }, 00:29:50.437 { 00:29:50.437 "name": "BaseBdev2", 00:29:50.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.437 "is_configured": false, 00:29:50.437 "data_offset": 0, 00:29:50.437 "data_size": 0 00:29:50.437 } 00:29:50.437 ] 00:29:50.437 }' 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:50.437 13:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.004 [2024-10-09 13:58:57.349351] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:51.004 [2024-10-09 13:58:57.349402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.004 [2024-10-09 13:58:57.357403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:51.004 [2024-10-09 13:58:57.359663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:51.004 [2024-10-09 13:58:57.359705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:51.004 "name": "Existed_Raid", 00:29:51.004 "uuid": "228dfd84-28ff-435b-8539-8ca07d687ec5", 00:29:51.004 "strip_size_kb": 64, 00:29:51.004 "state": "configuring", 00:29:51.004 "raid_level": "concat", 00:29:51.004 "superblock": true, 00:29:51.004 "num_base_bdevs": 2, 00:29:51.004 "num_base_bdevs_discovered": 1, 00:29:51.004 "num_base_bdevs_operational": 2, 00:29:51.004 "base_bdevs_list": [ 00:29:51.004 { 00:29:51.004 "name": "BaseBdev1", 00:29:51.004 "uuid": "ec3c9a22-6461-4d06-8f24-01840f3c70c3", 00:29:51.004 "is_configured": true, 00:29:51.004 "data_offset": 2048, 00:29:51.004 "data_size": 63488 00:29:51.004 }, 00:29:51.004 { 00:29:51.004 "name": "BaseBdev2", 00:29:51.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.004 "is_configured": false, 00:29:51.004 "data_offset": 0, 00:29:51.004 "data_size": 0 00:29:51.004 } 00:29:51.004 ] 00:29:51.004 }' 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:51.004 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.572 [2024-10-09 13:58:57.839473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:51.572 [2024-10-09 13:58:57.839820] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:51.572 [2024-10-09 13:58:57.839858] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:51.572 BaseBdev2 00:29:51.572 [2024-10-09 13:58:57.840279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:51.572 [2024-10-09 13:58:57.840472] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:29:51.572 [2024-10-09 13:58:57.840498] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:29:51.572 [2024-10-09 13:58:57.840701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.572 [ 00:29:51.572 { 00:29:51.572 "name": "BaseBdev2", 00:29:51.572 "aliases": [ 00:29:51.572 "3e5a74c2-5ff1-4388-92b7-a43bc40b5edc" 00:29:51.572 ], 00:29:51.572 "product_name": "Malloc disk", 00:29:51.572 "block_size": 512, 00:29:51.572 "num_blocks": 65536, 00:29:51.572 "uuid": "3e5a74c2-5ff1-4388-92b7-a43bc40b5edc", 00:29:51.572 "assigned_rate_limits": { 00:29:51.572 "rw_ios_per_sec": 0, 00:29:51.572 "rw_mbytes_per_sec": 0, 00:29:51.572 "r_mbytes_per_sec": 0, 00:29:51.572 "w_mbytes_per_sec": 0 00:29:51.572 }, 00:29:51.572 "claimed": true, 00:29:51.572 "claim_type": "exclusive_write", 00:29:51.572 "zoned": false, 00:29:51.572 "supported_io_types": { 00:29:51.572 "read": true, 00:29:51.572 "write": true, 00:29:51.572 "unmap": true, 00:29:51.572 "flush": true, 00:29:51.572 "reset": true, 00:29:51.572 "nvme_admin": false, 00:29:51.572 "nvme_io": false, 00:29:51.572 "nvme_io_md": false, 00:29:51.572 "write_zeroes": true, 00:29:51.572 "zcopy": true, 00:29:51.572 "get_zone_info": false, 00:29:51.572 "zone_management": false, 00:29:51.572 "zone_append": false, 00:29:51.572 "compare": false, 00:29:51.572 "compare_and_write": false, 00:29:51.572 "abort": true, 00:29:51.572 "seek_hole": false, 00:29:51.572 "seek_data": false, 00:29:51.572 "copy": true, 00:29:51.572 "nvme_iov_md": false 00:29:51.572 }, 00:29:51.572 "memory_domains": [ 00:29:51.572 { 00:29:51.572 "dma_device_id": "system", 00:29:51.572 "dma_device_type": 1 00:29:51.572 }, 00:29:51.572 { 00:29:51.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:51.572 "dma_device_type": 2 00:29:51.572 } 00:29:51.572 ], 00:29:51.572 "driver_specific": {} 00:29:51.572 } 00:29:51.572 ] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:51.572 "name": "Existed_Raid", 00:29:51.572 "uuid": "228dfd84-28ff-435b-8539-8ca07d687ec5", 00:29:51.572 "strip_size_kb": 64, 00:29:51.572 "state": "online", 00:29:51.572 "raid_level": "concat", 00:29:51.572 "superblock": true, 00:29:51.572 "num_base_bdevs": 2, 00:29:51.572 "num_base_bdevs_discovered": 2, 00:29:51.572 "num_base_bdevs_operational": 2, 00:29:51.572 "base_bdevs_list": [ 00:29:51.572 { 00:29:51.572 "name": "BaseBdev1", 00:29:51.572 "uuid": "ec3c9a22-6461-4d06-8f24-01840f3c70c3", 00:29:51.572 "is_configured": true, 00:29:51.572 "data_offset": 2048, 00:29:51.572 "data_size": 63488 00:29:51.572 }, 00:29:51.572 { 00:29:51.572 "name": "BaseBdev2", 00:29:51.572 "uuid": "3e5a74c2-5ff1-4388-92b7-a43bc40b5edc", 00:29:51.572 "is_configured": true, 00:29:51.572 "data_offset": 2048, 00:29:51.572 "data_size": 63488 00:29:51.572 } 00:29:51.572 ] 00:29:51.572 }' 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:51.572 13:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.832 [2024-10-09 13:58:58.331896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.832 "name": "Existed_Raid", 00:29:51.832 "aliases": [ 00:29:51.832 "228dfd84-28ff-435b-8539-8ca07d687ec5" 00:29:51.832 ], 00:29:51.832 "product_name": "Raid Volume", 00:29:51.832 "block_size": 512, 00:29:51.832 "num_blocks": 126976, 00:29:51.832 "uuid": "228dfd84-28ff-435b-8539-8ca07d687ec5", 00:29:51.832 "assigned_rate_limits": { 00:29:51.832 "rw_ios_per_sec": 0, 00:29:51.832 "rw_mbytes_per_sec": 0, 00:29:51.832 "r_mbytes_per_sec": 0, 00:29:51.832 "w_mbytes_per_sec": 0 00:29:51.832 }, 00:29:51.832 "claimed": false, 00:29:51.832 "zoned": false, 00:29:51.832 "supported_io_types": { 00:29:51.832 "read": true, 00:29:51.832 "write": true, 00:29:51.832 "unmap": true, 00:29:51.832 "flush": true, 00:29:51.832 "reset": true, 00:29:51.832 "nvme_admin": false, 00:29:51.832 "nvme_io": false, 00:29:51.832 "nvme_io_md": false, 00:29:51.832 "write_zeroes": true, 00:29:51.832 "zcopy": false, 00:29:51.832 "get_zone_info": false, 00:29:51.832 "zone_management": false, 00:29:51.832 "zone_append": false, 00:29:51.832 "compare": false, 00:29:51.832 "compare_and_write": false, 00:29:51.832 "abort": false, 00:29:51.832 "seek_hole": false, 
00:29:51.832 "seek_data": false, 00:29:51.832 "copy": false, 00:29:51.832 "nvme_iov_md": false 00:29:51.832 }, 00:29:51.832 "memory_domains": [ 00:29:51.832 { 00:29:51.832 "dma_device_id": "system", 00:29:51.832 "dma_device_type": 1 00:29:51.832 }, 00:29:51.832 { 00:29:51.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:51.832 "dma_device_type": 2 00:29:51.832 }, 00:29:51.832 { 00:29:51.832 "dma_device_id": "system", 00:29:51.832 "dma_device_type": 1 00:29:51.832 }, 00:29:51.832 { 00:29:51.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:51.832 "dma_device_type": 2 00:29:51.832 } 00:29:51.832 ], 00:29:51.832 "driver_specific": { 00:29:51.832 "raid": { 00:29:51.832 "uuid": "228dfd84-28ff-435b-8539-8ca07d687ec5", 00:29:51.832 "strip_size_kb": 64, 00:29:51.832 "state": "online", 00:29:51.832 "raid_level": "concat", 00:29:51.832 "superblock": true, 00:29:51.832 "num_base_bdevs": 2, 00:29:51.832 "num_base_bdevs_discovered": 2, 00:29:51.832 "num_base_bdevs_operational": 2, 00:29:51.832 "base_bdevs_list": [ 00:29:51.832 { 00:29:51.832 "name": "BaseBdev1", 00:29:51.832 "uuid": "ec3c9a22-6461-4d06-8f24-01840f3c70c3", 00:29:51.832 "is_configured": true, 00:29:51.832 "data_offset": 2048, 00:29:51.832 "data_size": 63488 00:29:51.832 }, 00:29:51.832 { 00:29:51.832 "name": "BaseBdev2", 00:29:51.832 "uuid": "3e5a74c2-5ff1-4388-92b7-a43bc40b5edc", 00:29:51.832 "is_configured": true, 00:29:51.832 "data_offset": 2048, 00:29:51.832 "data_size": 63488 00:29:51.832 } 00:29:51.832 ] 00:29:51.832 } 00:29:51.832 } 00:29:51.832 }' 00:29:51.832 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:52.103 BaseBdev2' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:52.103 13:58:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.103 [2024-10-09 13:58:58.555701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:52.103 [2024-10-09 13:58:58.555737] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:52.103 [2024-10-09 13:58:58.555790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:52.103 13:58:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.103 "name": "Existed_Raid", 00:29:52.103 "uuid": "228dfd84-28ff-435b-8539-8ca07d687ec5", 00:29:52.103 "strip_size_kb": 64, 00:29:52.103 "state": "offline", 00:29:52.103 "raid_level": "concat", 00:29:52.103 "superblock": true, 00:29:52.103 "num_base_bdevs": 2, 00:29:52.103 "num_base_bdevs_discovered": 1, 00:29:52.103 "num_base_bdevs_operational": 1, 00:29:52.103 "base_bdevs_list": [ 00:29:52.103 { 00:29:52.103 "name": null, 00:29:52.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.103 "is_configured": false, 00:29:52.103 "data_offset": 0, 00:29:52.103 "data_size": 63488 00:29:52.103 }, 00:29:52.103 { 00:29:52.103 "name": 
"BaseBdev2", 00:29:52.103 "uuid": "3e5a74c2-5ff1-4388-92b7-a43bc40b5edc", 00:29:52.103 "is_configured": true, 00:29:52.103 "data_offset": 2048, 00:29:52.103 "data_size": 63488 00:29:52.103 } 00:29:52.103 ] 00:29:52.103 }' 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.103 13:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.687 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:52.687 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.688 [2024-10-09 13:58:59.059857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:52.688 [2024-10-09 13:58:59.059916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73584 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73584 ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73584 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73584 00:29:52.688 killing process with 
pid 73584 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73584' 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73584 00:29:52.688 [2024-10-09 13:58:59.163375] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:52.688 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73584 00:29:52.688 [2024-10-09 13:58:59.164448] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:52.947 13:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:52.947 00:29:52.947 real 0m4.039s 00:29:52.947 user 0m6.375s 00:29:52.947 sys 0m0.888s 00:29:52.947 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.947 13:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:52.947 ************************************ 00:29:52.947 END TEST raid_state_function_test_sb 00:29:52.947 ************************************ 00:29:52.947 13:58:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:29:52.947 13:58:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:52.947 13:58:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.947 13:58:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:52.947 ************************************ 00:29:52.947 START TEST raid_superblock_test 00:29:52.947 ************************************ 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # 
raid_superblock_test concat 2 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73825 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73825 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # '[' -z 73825 ']' 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.947 13:58:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.206 [2024-10-09 13:58:59.596381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:53.206 [2024-10-09 13:58:59.596581] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73825 ] 00:29:53.465 [2024-10-09 13:58:59.773745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.465 [2024-10-09 13:58:59.818008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.465 [2024-10-09 13:58:59.861323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:53.465 [2024-10-09 13:58:59.861364] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:54.035 
13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.035 malloc1 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.035 [2024-10-09 13:59:00.549387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:54.035 [2024-10-09 13:59:00.549473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:54.035 [2024-10-09 13:59:00.549499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:54.035 [2024-10-09 13:59:00.549526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.035 [2024-10-09 13:59:00.552053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.035 [2024-10-09 13:59:00.552103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:54.035 pt1 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:54.035 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:54.036 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:54.036 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:54.036 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:54.036 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.036 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.295 malloc2 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.295 [2024-10-09 13:59:00.593804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:54.295 [2024-10-09 13:59:00.593860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:54.295 [2024-10-09 13:59:00.593880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:54.295 [2024-10-09 13:59:00.593894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.295 [2024-10-09 13:59:00.596467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.295 [2024-10-09 13:59:00.596508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:54.295 
pt2 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.295 [2024-10-09 13:59:00.601883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:54.295 [2024-10-09 13:59:00.604202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:54.295 [2024-10-09 13:59:00.604333] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:29:54.295 [2024-10-09 13:59:00.604356] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:54.295 [2024-10-09 13:59:00.604648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:29:54.295 [2024-10-09 13:59:00.604790] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:29:54.295 [2024-10-09 13:59:00.604801] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:29:54.295 [2024-10-09 13:59:00.604915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.295 "name": "raid_bdev1", 00:29:54.295 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044", 00:29:54.295 "strip_size_kb": 64, 00:29:54.295 "state": "online", 00:29:54.295 "raid_level": "concat", 00:29:54.295 "superblock": true, 00:29:54.295 "num_base_bdevs": 2, 00:29:54.295 "num_base_bdevs_discovered": 2, 00:29:54.295 "num_base_bdevs_operational": 2, 00:29:54.295 "base_bdevs_list": [ 00:29:54.295 { 00:29:54.295 "name": "pt1", 
00:29:54.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:54.295 "is_configured": true, 00:29:54.295 "data_offset": 2048, 00:29:54.295 "data_size": 63488 00:29:54.295 }, 00:29:54.295 { 00:29:54.295 "name": "pt2", 00:29:54.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:54.295 "is_configured": true, 00:29:54.295 "data_offset": 2048, 00:29:54.295 "data_size": 63488 00:29:54.295 } 00:29:54.295 ] 00:29:54.295 }' 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.295 13:59:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.554 [2024-10-09 13:59:01.046245] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.554 "name": "raid_bdev1", 00:29:54.554 "aliases": [ 00:29:54.554 "fc329aa8-eab8-45f4-9a02-ea3776a1c044" 00:29:54.554 ], 00:29:54.554 "product_name": "Raid Volume", 00:29:54.554 "block_size": 512, 00:29:54.554 "num_blocks": 126976, 00:29:54.554 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044", 00:29:54.554 "assigned_rate_limits": { 00:29:54.554 "rw_ios_per_sec": 0, 00:29:54.554 "rw_mbytes_per_sec": 0, 00:29:54.554 "r_mbytes_per_sec": 0, 00:29:54.554 "w_mbytes_per_sec": 0 00:29:54.554 }, 00:29:54.554 "claimed": false, 00:29:54.554 "zoned": false, 00:29:54.554 "supported_io_types": { 00:29:54.554 "read": true, 00:29:54.554 "write": true, 00:29:54.554 "unmap": true, 00:29:54.554 "flush": true, 00:29:54.554 "reset": true, 00:29:54.554 "nvme_admin": false, 00:29:54.554 "nvme_io": false, 00:29:54.554 "nvme_io_md": false, 00:29:54.554 "write_zeroes": true, 00:29:54.554 "zcopy": false, 00:29:54.554 "get_zone_info": false, 00:29:54.554 "zone_management": false, 00:29:54.554 "zone_append": false, 00:29:54.554 "compare": false, 00:29:54.554 "compare_and_write": false, 00:29:54.554 "abort": false, 00:29:54.554 "seek_hole": false, 00:29:54.554 "seek_data": false, 00:29:54.554 "copy": false, 00:29:54.554 "nvme_iov_md": false 00:29:54.554 }, 00:29:54.554 "memory_domains": [ 00:29:54.554 { 00:29:54.554 "dma_device_id": "system", 00:29:54.554 "dma_device_type": 1 00:29:54.554 }, 00:29:54.554 { 00:29:54.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:54.554 "dma_device_type": 2 00:29:54.554 }, 00:29:54.554 { 00:29:54.554 "dma_device_id": "system", 00:29:54.554 "dma_device_type": 1 00:29:54.554 }, 00:29:54.554 { 00:29:54.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:54.554 "dma_device_type": 2 00:29:54.554 } 00:29:54.554 ], 00:29:54.554 "driver_specific": { 00:29:54.554 "raid": { 00:29:54.554 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044", 00:29:54.554 "strip_size_kb": 64, 00:29:54.554 "state": "online", 00:29:54.554 
"raid_level": "concat", 00:29:54.554 "superblock": true, 00:29:54.554 "num_base_bdevs": 2, 00:29:54.554 "num_base_bdevs_discovered": 2, 00:29:54.554 "num_base_bdevs_operational": 2, 00:29:54.554 "base_bdevs_list": [ 00:29:54.554 { 00:29:54.554 "name": "pt1", 00:29:54.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:54.554 "is_configured": true, 00:29:54.554 "data_offset": 2048, 00:29:54.554 "data_size": 63488 00:29:54.554 }, 00:29:54.554 { 00:29:54.554 "name": "pt2", 00:29:54.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:54.554 "is_configured": true, 00:29:54.554 "data_offset": 2048, 00:29:54.554 "data_size": 63488 00:29:54.554 } 00:29:54.554 ] 00:29:54.554 } 00:29:54.554 } 00:29:54.554 }' 00:29:54.554 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:54.814 pt2' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.814 13:59:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.814 [2024-10-09 13:59:01.274198] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fc329aa8-eab8-45f4-9a02-ea3776a1c044 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
fc329aa8-eab8-45f4-9a02-ea3776a1c044 ']' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.814 [2024-10-09 13:59:01.313942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:54.814 [2024-10-09 13:59:01.313977] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:54.814 [2024-10-09 13:59:01.314058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:54.814 [2024-10-09 13:59:01.314112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:54.814 [2024-10-09 13:59:01.314133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.814 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:55.074 13:59:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.074 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.074 [2024-10-09 13:59:01.434025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:55.074 [2024-10-09 13:59:01.436473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:55.074 [2024-10-09 13:59:01.436574] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:55.074 [2024-10-09 13:59:01.436624] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:55.074 [2024-10-09 13:59:01.436646] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:55.074 [2024-10-09 13:59:01.436664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:29:55.074 request: 00:29:55.074 { 00:29:55.074 "name": "raid_bdev1", 00:29:55.074 "raid_level": "concat", 00:29:55.074 "base_bdevs": [ 00:29:55.074 "malloc1", 00:29:55.074 "malloc2" 00:29:55.074 ], 00:29:55.074 "strip_size_kb": 64, 
00:29:55.074 "superblock": false, 00:29:55.074 "method": "bdev_raid_create", 00:29:55.074 "req_id": 1 00:29:55.074 } 00:29:55.075 Got JSON-RPC error response 00:29:55.075 response: 00:29:55.075 { 00:29:55.075 "code": -17, 00:29:55.075 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:55.075 } 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.075 [2024-10-09 13:59:01.494000] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:29:55.075 [2024-10-09 13:59:01.494056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.075 [2024-10-09 13:59:01.494083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:55.075 [2024-10-09 13:59:01.494096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.075 [2024-10-09 13:59:01.496916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.075 [2024-10-09 13:59:01.496955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:55.075 [2024-10-09 13:59:01.497033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:55.075 [2024-10-09 13:59:01.497096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:55.075 pt1 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.075 "name": "raid_bdev1", 00:29:55.075 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044", 00:29:55.075 "strip_size_kb": 64, 00:29:55.075 "state": "configuring", 00:29:55.075 "raid_level": "concat", 00:29:55.075 "superblock": true, 00:29:55.075 "num_base_bdevs": 2, 00:29:55.075 "num_base_bdevs_discovered": 1, 00:29:55.075 "num_base_bdevs_operational": 2, 00:29:55.075 "base_bdevs_list": [ 00:29:55.075 { 00:29:55.075 "name": "pt1", 00:29:55.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:55.075 "is_configured": true, 00:29:55.075 "data_offset": 2048, 00:29:55.075 "data_size": 63488 00:29:55.075 }, 00:29:55.075 { 00:29:55.075 "name": null, 00:29:55.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:55.075 "is_configured": false, 00:29:55.075 "data_offset": 2048, 00:29:55.075 "data_size": 63488 00:29:55.075 } 00:29:55.075 ] 00:29:55.075 }' 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.075 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 [2024-10-09 13:59:01.966147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:55.643 [2024-10-09 13:59:01.966223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.643 [2024-10-09 13:59:01.966254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:55.643 [2024-10-09 13:59:01.966267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.643 [2024-10-09 13:59:01.966741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.643 [2024-10-09 13:59:01.966770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:55.643 [2024-10-09 13:59:01.966866] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:55.643 [2024-10-09 13:59:01.966893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:55.643 [2024-10-09 13:59:01.966983] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:29:55.643 [2024-10-09 13:59:01.966999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:29:55.643 [2024-10-09 13:59:01.967243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:55.643 [2024-10-09 13:59:01.967354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 
00:29:55.643 [2024-10-09 13:59:01.967377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:29:55.643 [2024-10-09 13:59:01.967478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:55.643 pt2 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 13:59:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.643 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.643 "name": "raid_bdev1", 00:29:55.643 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044", 00:29:55.643 "strip_size_kb": 64, 00:29:55.643 "state": "online", 00:29:55.643 "raid_level": "concat", 00:29:55.643 "superblock": true, 00:29:55.643 "num_base_bdevs": 2, 00:29:55.643 "num_base_bdevs_discovered": 2, 00:29:55.643 "num_base_bdevs_operational": 2, 00:29:55.643 "base_bdevs_list": [ 00:29:55.643 { 00:29:55.643 "name": "pt1", 00:29:55.643 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:55.644 "is_configured": true, 00:29:55.644 "data_offset": 2048, 00:29:55.644 "data_size": 63488 00:29:55.644 }, 00:29:55.644 { 00:29:55.644 "name": "pt2", 00:29:55.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:55.644 "is_configured": true, 00:29:55.644 "data_offset": 2048, 00:29:55.644 "data_size": 63488 00:29:55.644 } 00:29:55.644 ] 00:29:55.644 }' 00:29:55.644 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.644 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:55.902 13:59:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:55.902 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:29:55.902 [2024-10-09 13:59:02.438596] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:29:56.161 "name": "raid_bdev1",
00:29:56.161 "aliases": [
00:29:56.161 "fc329aa8-eab8-45f4-9a02-ea3776a1c044"
00:29:56.161 ],
00:29:56.161 "product_name": "Raid Volume",
00:29:56.161 "block_size": 512,
00:29:56.161 "num_blocks": 126976,
00:29:56.161 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044",
00:29:56.161 "assigned_rate_limits": {
00:29:56.161 "rw_ios_per_sec": 0,
00:29:56.161 "rw_mbytes_per_sec": 0,
00:29:56.161 "r_mbytes_per_sec": 0,
00:29:56.161 "w_mbytes_per_sec": 0
00:29:56.161 },
00:29:56.161 "claimed": false,
00:29:56.161 "zoned": false,
00:29:56.161 "supported_io_types": {
00:29:56.161 "read": true,
00:29:56.161 "write": true,
00:29:56.161 "unmap": true,
00:29:56.161 "flush": true,
00:29:56.161 "reset": true,
00:29:56.161 "nvme_admin": false,
00:29:56.161 "nvme_io": false,
00:29:56.161 "nvme_io_md": false,
00:29:56.161 "write_zeroes": true,
00:29:56.161 "zcopy": false,
00:29:56.161 "get_zone_info": false,
00:29:56.161 "zone_management": false,
00:29:56.161 "zone_append": false,
00:29:56.161 "compare": false,
00:29:56.161 "compare_and_write": false,
00:29:56.161 "abort": false,
00:29:56.161 "seek_hole": false,
00:29:56.161 "seek_data": false,
00:29:56.161 "copy": false,
00:29:56.161 "nvme_iov_md": false
00:29:56.161 },
00:29:56.161 "memory_domains": [
00:29:56.161 {
00:29:56.161 "dma_device_id": "system",
00:29:56.161 "dma_device_type": 1
00:29:56.161 },
00:29:56.161 {
00:29:56.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:29:56.161 "dma_device_type": 2
00:29:56.161 },
00:29:56.161 {
00:29:56.161 "dma_device_id": "system",
00:29:56.161 "dma_device_type": 1
00:29:56.161 },
00:29:56.161 {
00:29:56.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:29:56.161 "dma_device_type": 2
00:29:56.161 }
00:29:56.161 ],
00:29:56.161 "driver_specific": {
00:29:56.161 "raid": {
00:29:56.161 "uuid": "fc329aa8-eab8-45f4-9a02-ea3776a1c044",
00:29:56.161 "strip_size_kb": 64,
00:29:56.161 "state": "online",
00:29:56.161 "raid_level": "concat",
00:29:56.161 "superblock": true,
00:29:56.161 "num_base_bdevs": 2,
00:29:56.161 "num_base_bdevs_discovered": 2,
00:29:56.161 "num_base_bdevs_operational": 2,
00:29:56.161 "base_bdevs_list": [
00:29:56.161 {
00:29:56.161 "name": "pt1",
00:29:56.161 "uuid": "00000000-0000-0000-0000-000000000001",
00:29:56.161 "is_configured": true,
00:29:56.161 "data_offset": 2048,
00:29:56.161 "data_size": 63488
00:29:56.161 },
00:29:56.161 {
00:29:56.161 "name": "pt2",
00:29:56.161 "uuid": "00000000-0000-0000-0000-000000000002",
00:29:56.161 "is_configured": true,
00:29:56.161 "data_offset": 2048,
00:29:56.161 "data_size": 63488
00:29:56.161 }
00:29:56.161 ]
00:29:56.161 }
00:29:56.161 }
00:29:56.161 }'
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:29:56.161 pt2'
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:29:56.161 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:29:56.162 [2024-10-09 13:59:02.670619] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:29:56.162 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fc329aa8-eab8-45f4-9a02-ea3776a1c044 '!=' fc329aa8-eab8-45f4-9a02-ea3776a1c044 ']'
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73825
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73825 ']'
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73825
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73825
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 73825
13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73825'
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73825
00:29:56.421 [2024-10-09 13:59:02.764734] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-10-09 13:59:02.764820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:29:56.421 13:59:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73825
00:29:56.421 [2024-10-09 13:59:02.764879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:29:56.421 [2024-10-09 13:59:02.764893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:29:56.421 [2024-10-09 13:59:02.791228] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:29:56.682 13:59:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:29:56.682
00:29:56.682 real 0m3.557s
00:29:56.682 user 0m5.492s
00:29:56.682 sys 0m0.839s
00:29:56.682 13:59:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:56.683 13:59:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:29:56.683 ************************************
00:29:56.683 END TEST raid_superblock_test
00:29:56.683 ************************************
00:29:56.683 13:59:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read
00:29:56.683 13:59:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:29:56.683 13:59:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:56.683 13:59:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:29:56.683 ************************************
00:29:56.683 START TEST raid_read_error_test
00:29:56.683 ************************************
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LCxfbGJgdH
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74025
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74025
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74025 ']'
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:56.683 13:59:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:56.942 [2024-10-09 13:59:03.234134] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:29:56.942 [2024-10-09 13:59:03.234320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74025 ]
00:29:56.942 [2024-10-09 13:59:03.420274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:56.942 [2024-10-09 13:59:03.474480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:29:57.200 [2024-10-09 13:59:03.521531] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:29:57.200 [2024-10-09 13:59:03.521586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:57.768 BaseBdev1_malloc
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:57.768 true
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:57.768 [2024-10-09 13:59:04.288309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:29:57.768 [2024-10-09 13:59:04.288375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:57.768 [2024-10-09 13:59:04.288426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:29:57.768 [2024-10-09 13:59:04.288440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:57.768 [2024-10-09 13:59:04.291434] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:57.768 [2024-10-09 13:59:04.291481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:29:57.768 BaseBdev1
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:57.768 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.027 BaseBdev2_malloc
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.027 true
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.027 [2024-10-09 13:59:04.333117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:29:58.027 [2024-10-09 13:59:04.333177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:29:58.027 [2024-10-09 13:59:04.333202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:29:58.027 [2024-10-09 13:59:04.333216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:29:58.027 [2024-10-09 13:59:04.336210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:29:58.027 [2024-10-09 13:59:04.336254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:29:58.027 BaseBdev2
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.027 [2024-10-09 13:59:04.341159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:29:58.027 [2024-10-09 13:59:04.343844] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:29:58.027 [2024-10-09 13:59:04.344040] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:29:58.027 [2024-10-09 13:59:04.344056] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:29:58.027 [2024-10-09 13:59:04.344374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:29:58.027 [2024-10-09 13:59:04.344521] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:29:58.027 [2024-10-09 13:59:04.344539] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:29:58.027 [2024-10-09 13:59:04.344748] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:58.027 "name": "raid_bdev1",
00:29:58.027 "uuid": "91676649-db53-46cd-bff3-aa4ab5df9a4f",
00:29:58.027 "strip_size_kb": 64,
00:29:58.027 "state": "online",
00:29:58.027 "raid_level": "concat",
00:29:58.027 "superblock": true,
00:29:58.027 "num_base_bdevs": 2,
00:29:58.027 "num_base_bdevs_discovered": 2,
00:29:58.027 "num_base_bdevs_operational": 2,
00:29:58.027 "base_bdevs_list": [
00:29:58.027 {
00:29:58.027 "name": "BaseBdev1",
00:29:58.027 "uuid": "7cb3143d-a0b6-5ca6-b5ed-3ca9a858c029",
00:29:58.027 "is_configured": true,
00:29:58.027 "data_offset": 2048,
00:29:58.027 "data_size": 63488
00:29:58.027 },
00:29:58.027 {
00:29:58.027 "name": "BaseBdev2",
00:29:58.027 "uuid": "1affbaa2-1c97-5345-b065-88b4a3905cc4",
00:29:58.027 "is_configured": true,
00:29:58.027 "data_offset": 2048,
00:29:58.027 "data_size": 63488
00:29:58.027 }
00:29:58.027 ]
00:29:58.027 }'
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:58.027 13:59:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:58.286 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:29:58.286 13:59:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:29:58.545 [2024-10-09 13:59:04.933820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:59.490 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:29:59.490 "name": "raid_bdev1",
00:29:59.490 "uuid": "91676649-db53-46cd-bff3-aa4ab5df9a4f",
00:29:59.490 "strip_size_kb": 64,
00:29:59.490 "state": "online",
00:29:59.490 "raid_level": "concat",
00:29:59.490 "superblock": true,
00:29:59.490 "num_base_bdevs": 2,
00:29:59.490 "num_base_bdevs_discovered": 2,
00:29:59.490 "num_base_bdevs_operational": 2,
00:29:59.490 "base_bdevs_list": [
00:29:59.490 {
00:29:59.490 "name": "BaseBdev1",
00:29:59.490 "uuid": "7cb3143d-a0b6-5ca6-b5ed-3ca9a858c029",
00:29:59.490 "is_configured": true,
00:29:59.490 "data_offset": 2048,
00:29:59.490 "data_size": 63488
00:29:59.490 },
00:29:59.490 {
00:29:59.490 "name": "BaseBdev2",
00:29:59.490 "uuid": "1affbaa2-1c97-5345-b065-88b4a3905cc4",
00:29:59.490 "is_configured": true,
00:29:59.490 "data_offset": 2048,
00:29:59.491 "data_size": 63488
00:29:59.491 }
00:29:59.491 ]
00:29:59.491 }'
00:29:59.491 13:59:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:29:59.491 13:59:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:29:59.750 [2024-10-09 13:59:06.289287] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:29:59.750 [2024-10-09 13:59:06.289318] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:29:59.750 [2024-10-09 13:59:06.291956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:29:59.750 [2024-10-09 13:59:06.292000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:29:59.750 [2024-10-09 13:59:06.292040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:29:59.750 [2024-10-09 13:59:06.292059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
{
00:29:59.750 "results": [
00:29:59.750 {
00:29:59.750 "job": "raid_bdev1",
00:29:59.750 "core_mask": "0x1",
00:29:59.750 "workload": "randrw",
00:29:59.750 "percentage": 50,
00:29:59.750 "status": "finished",
00:29:59.750 "queue_depth": 1,
00:29:59.750 "io_size": 131072,
00:29:59.750 "runtime": 1.352751,
00:29:59.750 "iops": 14694.500318240385,
00:29:59.750 "mibps": 1836.8125397800482,
00:29:59.750 "io_failed": 1,
00:29:59.750 "io_timeout": 0,
00:29:59.750 "avg_latency_us": 93.70353112521228,
00:29:59.750 "min_latency_us": 26.697142857142858,
00:29:59.750 "max_latency_us": 1771.032380952381
00:29:59.750 }
00:29:59.750 ],
00:29:59.750 "core_count": 1
00:29:59.750 }
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74025
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74025 ']'
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74025
00:29:59.750 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74025
00:30:00.009 killing process with pid 74025
13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74025'
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74025
00:30:00.009 [2024-10-09 13:59:06.339103] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:30:00.009 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74025
00:30:00.009 [2024-10-09 13:59:06.355037] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LCxfbGJgdH
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:30:00.267
00:30:00.267 real 0m3.506s
00:30:00.267 user 0m4.574s
00:30:00.267 sys 0m0.604s
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:00.267 13:59:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:00.267 ************************************
00:30:00.267 END TEST raid_read_error_test
00:30:00.267 ************************************
00:30:00.267 13:59:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:30:00.267 13:59:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:30:00.268 13:59:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:00.268 13:59:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:30:00.268 ************************************
00:30:00.268 START TEST raid_write_error_test
00:30:00.268 ************************************
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JT04yTZmQK
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74160
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74160
00:30:00.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74160 ']'
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:00.268 13:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:00.268 [2024-10-09 13:59:06.799542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:30:00.268 [2024-10-09 13:59:06.799983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74160 ]
00:30:00.526 [2024-10-09 13:59:06.979828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:00.526 [2024-10-09 13:59:07.025507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:00.526 [2024-10-09 13:59:07.069837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:30:00.526 [2024-10-09 13:59:07.070046] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 BaseBdev1_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 true
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 [2024-10-09 13:59:07.794924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:30:01.523 [2024-10-09 13:59:07.795000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:01.523 [2024-10-09 13:59:07.795036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:30:01.523 [2024-10-09 13:59:07.795053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:01.523 [2024-10-09 13:59:07.798051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:01.523 [2024-10-09 13:59:07.798098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:30:01.523 BaseBdev1
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 BaseBdev2_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 true
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:30:01.523 [2024-10-09 13:59:07.833777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:30:01.523 [2024-10-09 13:59:07.833846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:01.523 [2024-10-09 13:59:07.833871] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
[2024-10-09 13:59:07.833884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:01.523 [2024-10-09 13:59:07.836664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:01.523 [2024-10-09 13:59:07.836859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:01.523 BaseBdev2 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.523 [2024-10-09 13:59:07.841842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:01.523 [2024-10-09 13:59:07.844283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:01.523 [2024-10-09 13:59:07.844624] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:01.523 [2024-10-09 13:59:07.844646] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:01.523 [2024-10-09 13:59:07.844959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:01.523 [2024-10-09 13:59:07.845087] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:01.523 [2024-10-09 13:59:07.845105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:30:01.523 [2024-10-09 13:59:07.845251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.523 
13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:01.523 "name": "raid_bdev1", 00:30:01.523 "uuid": "30fef01a-9591-405b-bf53-2158a5953edb", 00:30:01.523 "strip_size_kb": 64, 00:30:01.523 "state": "online", 00:30:01.523 "raid_level": "concat", 00:30:01.523 "superblock": true, 
00:30:01.523 "num_base_bdevs": 2, 00:30:01.523 "num_base_bdevs_discovered": 2, 00:30:01.523 "num_base_bdevs_operational": 2, 00:30:01.523 "base_bdevs_list": [ 00:30:01.523 { 00:30:01.523 "name": "BaseBdev1", 00:30:01.523 "uuid": "cdbb0d2a-9902-5501-95a9-273007c64e17", 00:30:01.523 "is_configured": true, 00:30:01.523 "data_offset": 2048, 00:30:01.523 "data_size": 63488 00:30:01.523 }, 00:30:01.523 { 00:30:01.523 "name": "BaseBdev2", 00:30:01.523 "uuid": "26072dda-14b8-50b4-8e7b-09d18ad475ba", 00:30:01.523 "is_configured": true, 00:30:01.523 "data_offset": 2048, 00:30:01.523 "data_size": 63488 00:30:01.523 } 00:30:01.523 ] 00:30:01.523 }' 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:01.523 13:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.781 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:01.781 13:59:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:02.039 [2024-10-09 13:59:08.422356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.976 "name": "raid_bdev1", 00:30:02.976 "uuid": "30fef01a-9591-405b-bf53-2158a5953edb", 00:30:02.976 "strip_size_kb": 64, 00:30:02.976 "state": "online", 00:30:02.976 "raid_level": "concat", 
00:30:02.976 "superblock": true, 00:30:02.976 "num_base_bdevs": 2, 00:30:02.976 "num_base_bdevs_discovered": 2, 00:30:02.976 "num_base_bdevs_operational": 2, 00:30:02.976 "base_bdevs_list": [ 00:30:02.976 { 00:30:02.976 "name": "BaseBdev1", 00:30:02.976 "uuid": "cdbb0d2a-9902-5501-95a9-273007c64e17", 00:30:02.976 "is_configured": true, 00:30:02.976 "data_offset": 2048, 00:30:02.976 "data_size": 63488 00:30:02.976 }, 00:30:02.976 { 00:30:02.976 "name": "BaseBdev2", 00:30:02.976 "uuid": "26072dda-14b8-50b4-8e7b-09d18ad475ba", 00:30:02.976 "is_configured": true, 00:30:02.976 "data_offset": 2048, 00:30:02.976 "data_size": 63488 00:30:02.976 } 00:30:02.976 ] 00:30:02.976 }' 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.976 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.235 [2024-10-09 13:59:09.765002] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:03.235 [2024-10-09 13:59:09.765197] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:03.235 [2024-10-09 13:59:09.768051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:03.235 [2024-10-09 13:59:09.768095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.235 [2024-10-09 13:59:09.768130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:03.235 [2024-10-09 13:59:09.768141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:30:03.235 { 
00:30:03.235 "results": [ 00:30:03.235 { 00:30:03.235 "job": "raid_bdev1", 00:30:03.235 "core_mask": "0x1", 00:30:03.235 "workload": "randrw", 00:30:03.235 "percentage": 50, 00:30:03.235 "status": "finished", 00:30:03.235 "queue_depth": 1, 00:30:03.235 "io_size": 131072, 00:30:03.235 "runtime": 1.340425, 00:30:03.235 "iops": 15704.720517746237, 00:30:03.235 "mibps": 1963.0900647182796, 00:30:03.235 "io_failed": 1, 00:30:03.235 "io_timeout": 0, 00:30:03.235 "avg_latency_us": 87.91570261393557, 00:30:03.235 "min_latency_us": 26.453333333333333, 00:30:03.235 "max_latency_us": 1451.1542857142856 00:30:03.235 } 00:30:03.235 ], 00:30:03.235 "core_count": 1 00:30:03.235 } 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74160 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74160 ']' 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74160 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:03.235 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74160 00:30:03.494 killing process with pid 74160 00:30:03.494 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:03.494 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:03.494 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74160' 00:30:03.494 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74160 00:30:03.494 [2024-10-09 13:59:09.812920] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:03.494 13:59:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74160 00:30:03.494 [2024-10-09 13:59:09.829200] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JT04yTZmQK 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:03.753 ************************************ 00:30:03.753 END TEST raid_write_error_test 00:30:03.753 ************************************ 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:30:03.753 00:30:03.753 real 0m3.417s 00:30:03.753 user 0m4.423s 00:30:03.753 sys 0m0.579s 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:03.753 13:59:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.753 13:59:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:30:03.753 13:59:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:30:03.753 13:59:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:03.753 13:59:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:03.753 13:59:10 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.753 ************************************ 00:30:03.753 START TEST raid_state_function_test 00:30:03.753 ************************************ 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74287 00:30:03.753 Process raid pid: 74287 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74287' 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74287 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74287 ']' 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:03.753 13:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.753 [2024-10-09 13:59:10.273368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:03.753 [2024-10-09 13:59:10.273584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.012 [2024-10-09 13:59:10.456780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.012 [2024-10-09 13:59:10.505226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.012 [2024-10-09 13:59:10.549749] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:04.012 [2024-10-09 13:59:10.549786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:04.946 [2024-10-09 13:59:11.249166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:04.946 [2024-10-09 13:59:11.249228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:04.946 [2024-10-09 13:59:11.249244] bdev.c:8272:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:04.946 [2024-10-09 13:59:11.249259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:04.946 "name": "Existed_Raid", 00:30:04.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.946 "strip_size_kb": 0, 00:30:04.946 "state": "configuring", 00:30:04.946 "raid_level": "raid1", 00:30:04.946 "superblock": false, 00:30:04.946 "num_base_bdevs": 2, 00:30:04.946 "num_base_bdevs_discovered": 0, 00:30:04.946 "num_base_bdevs_operational": 2, 00:30:04.946 "base_bdevs_list": [ 00:30:04.946 { 00:30:04.946 "name": "BaseBdev1", 00:30:04.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.946 "is_configured": false, 00:30:04.946 "data_offset": 0, 00:30:04.946 "data_size": 0 00:30:04.946 }, 00:30:04.946 { 00:30:04.946 "name": "BaseBdev2", 00:30:04.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.946 "is_configured": false, 00:30:04.946 "data_offset": 0, 00:30:04.946 "data_size": 0 00:30:04.946 } 00:30:04.946 ] 00:30:04.946 }' 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:04.946 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [2024-10-09 13:59:11.693190] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:05.206 [2024-10-09 13:59:11.693244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.206 13:59:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [2024-10-09 13:59:11.701212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:05.206 [2024-10-09 13:59:11.701259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:05.206 [2024-10-09 13:59:11.701269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:05.206 [2024-10-09 13:59:11.701282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [2024-10-09 13:59:11.718853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:05.206 BaseBdev1 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.206 [ 00:30:05.206 { 00:30:05.206 "name": "BaseBdev1", 00:30:05.206 "aliases": [ 00:30:05.206 "abf02091-c8f1-4612-b978-70ed0e9f2d6b" 00:30:05.206 ], 00:30:05.206 "product_name": "Malloc disk", 00:30:05.206 "block_size": 512, 00:30:05.206 "num_blocks": 65536, 00:30:05.206 "uuid": "abf02091-c8f1-4612-b978-70ed0e9f2d6b", 00:30:05.206 "assigned_rate_limits": { 00:30:05.206 "rw_ios_per_sec": 0, 00:30:05.206 "rw_mbytes_per_sec": 0, 00:30:05.206 "r_mbytes_per_sec": 0, 00:30:05.206 "w_mbytes_per_sec": 0 00:30:05.206 }, 00:30:05.206 "claimed": true, 00:30:05.206 "claim_type": "exclusive_write", 00:30:05.206 "zoned": false, 00:30:05.206 "supported_io_types": { 00:30:05.206 "read": true, 00:30:05.206 "write": true, 00:30:05.206 "unmap": true, 00:30:05.206 "flush": true, 00:30:05.206 "reset": true, 00:30:05.206 "nvme_admin": false, 00:30:05.206 "nvme_io": false, 00:30:05.206 "nvme_io_md": false, 00:30:05.206 "write_zeroes": true, 
00:30:05.206 "zcopy": true, 00:30:05.206 "get_zone_info": false, 00:30:05.206 "zone_management": false, 00:30:05.206 "zone_append": false, 00:30:05.206 "compare": false, 00:30:05.206 "compare_and_write": false, 00:30:05.206 "abort": true, 00:30:05.206 "seek_hole": false, 00:30:05.206 "seek_data": false, 00:30:05.206 "copy": true, 00:30:05.206 "nvme_iov_md": false 00:30:05.206 }, 00:30:05.206 "memory_domains": [ 00:30:05.206 { 00:30:05.206 "dma_device_id": "system", 00:30:05.206 "dma_device_type": 1 00:30:05.206 }, 00:30:05.206 { 00:30:05.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:05.206 "dma_device_type": 2 00:30:05.206 } 00:30:05.206 ], 00:30:05.206 "driver_specific": {} 00:30:05.206 } 00:30:05.206 ] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:05.206 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:05.465 "name": "Existed_Raid", 00:30:05.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.465 "strip_size_kb": 0, 00:30:05.465 "state": "configuring", 00:30:05.465 "raid_level": "raid1", 00:30:05.465 "superblock": false, 00:30:05.465 "num_base_bdevs": 2, 00:30:05.465 "num_base_bdevs_discovered": 1, 00:30:05.465 "num_base_bdevs_operational": 2, 00:30:05.465 "base_bdevs_list": [ 00:30:05.465 { 00:30:05.465 "name": "BaseBdev1", 00:30:05.465 "uuid": "abf02091-c8f1-4612-b978-70ed0e9f2d6b", 00:30:05.465 "is_configured": true, 00:30:05.465 "data_offset": 0, 00:30:05.465 "data_size": 65536 00:30:05.465 }, 00:30:05.465 { 00:30:05.465 "name": "BaseBdev2", 00:30:05.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.465 "is_configured": false, 00:30:05.465 "data_offset": 0, 00:30:05.465 "data_size": 0 00:30:05.465 } 00:30:05.465 ] 00:30:05.465 }' 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:05.465 13:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:05.724 13:59:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.724 [2024-10-09 13:59:12.187011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:05.724 [2024-10-09 13:59:12.187073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.724 [2024-10-09 13:59:12.199050] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:05.724 [2024-10-09 13:59:12.201369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:05.724 [2024-10-09 13:59:12.201518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:05.724 "name": "Existed_Raid", 00:30:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.724 "strip_size_kb": 0, 00:30:05.724 "state": "configuring", 00:30:05.724 "raid_level": "raid1", 00:30:05.724 "superblock": false, 00:30:05.724 "num_base_bdevs": 2, 00:30:05.724 "num_base_bdevs_discovered": 1, 00:30:05.724 "num_base_bdevs_operational": 2, 00:30:05.724 "base_bdevs_list": [ 00:30:05.724 { 00:30:05.724 "name": "BaseBdev1", 00:30:05.724 "uuid": 
"abf02091-c8f1-4612-b978-70ed0e9f2d6b", 00:30:05.724 "is_configured": true, 00:30:05.724 "data_offset": 0, 00:30:05.724 "data_size": 65536 00:30:05.724 }, 00:30:05.724 { 00:30:05.724 "name": "BaseBdev2", 00:30:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.724 "is_configured": false, 00:30:05.724 "data_offset": 0, 00:30:05.724 "data_size": 0 00:30:05.724 } 00:30:05.724 ] 00:30:05.724 }' 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:05.724 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.290 [2024-10-09 13:59:12.670100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:06.290 [2024-10-09 13:59:12.670156] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:06.290 [2024-10-09 13:59:12.670168] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:06.290 [2024-10-09 13:59:12.670554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:06.290 [2024-10-09 13:59:12.670754] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:06.290 [2024-10-09 13:59:12.670777] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:30:06.290 [2024-10-09 13:59:12.671033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.290 BaseBdev2 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.290 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.290 [ 00:30:06.290 { 00:30:06.290 "name": "BaseBdev2", 00:30:06.290 "aliases": [ 00:30:06.290 "3ae4daf7-727d-4753-b74c-96ac465e1197" 00:30:06.290 ], 00:30:06.290 "product_name": "Malloc disk", 00:30:06.290 "block_size": 512, 00:30:06.290 "num_blocks": 65536, 00:30:06.290 "uuid": "3ae4daf7-727d-4753-b74c-96ac465e1197", 00:30:06.290 "assigned_rate_limits": { 00:30:06.290 "rw_ios_per_sec": 0, 00:30:06.290 "rw_mbytes_per_sec": 0, 00:30:06.290 "r_mbytes_per_sec": 0, 00:30:06.290 "w_mbytes_per_sec": 0 00:30:06.290 }, 
00:30:06.290 "claimed": true, 00:30:06.290 "claim_type": "exclusive_write", 00:30:06.290 "zoned": false, 00:30:06.290 "supported_io_types": { 00:30:06.290 "read": true, 00:30:06.290 "write": true, 00:30:06.290 "unmap": true, 00:30:06.290 "flush": true, 00:30:06.290 "reset": true, 00:30:06.290 "nvme_admin": false, 00:30:06.290 "nvme_io": false, 00:30:06.290 "nvme_io_md": false, 00:30:06.290 "write_zeroes": true, 00:30:06.290 "zcopy": true, 00:30:06.290 "get_zone_info": false, 00:30:06.290 "zone_management": false, 00:30:06.291 "zone_append": false, 00:30:06.291 "compare": false, 00:30:06.291 "compare_and_write": false, 00:30:06.291 "abort": true, 00:30:06.291 "seek_hole": false, 00:30:06.291 "seek_data": false, 00:30:06.291 "copy": true, 00:30:06.291 "nvme_iov_md": false 00:30:06.291 }, 00:30:06.291 "memory_domains": [ 00:30:06.291 { 00:30:06.291 "dma_device_id": "system", 00:30:06.291 "dma_device_type": 1 00:30:06.291 }, 00:30:06.291 { 00:30:06.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:06.291 "dma_device_type": 2 00:30:06.291 } 00:30:06.291 ], 00:30:06.291 "driver_specific": {} 00:30:06.291 } 00:30:06.291 ] 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:06.291 "name": "Existed_Raid", 00:30:06.291 "uuid": "74f2bc93-fc99-45f2-8fe8-9feded08ba27", 00:30:06.291 "strip_size_kb": 0, 00:30:06.291 "state": "online", 00:30:06.291 "raid_level": "raid1", 00:30:06.291 "superblock": false, 00:30:06.291 "num_base_bdevs": 2, 00:30:06.291 "num_base_bdevs_discovered": 2, 00:30:06.291 "num_base_bdevs_operational": 2, 00:30:06.291 "base_bdevs_list": [ 00:30:06.291 { 00:30:06.291 "name": "BaseBdev1", 00:30:06.291 "uuid": "abf02091-c8f1-4612-b978-70ed0e9f2d6b", 00:30:06.291 "is_configured": true, 00:30:06.291 "data_offset": 0, 00:30:06.291 "data_size": 65536 00:30:06.291 
}, 00:30:06.291 { 00:30:06.291 "name": "BaseBdev2", 00:30:06.291 "uuid": "3ae4daf7-727d-4753-b74c-96ac465e1197", 00:30:06.291 "is_configured": true, 00:30:06.291 "data_offset": 0, 00:30:06.291 "data_size": 65536 00:30:06.291 } 00:30:06.291 ] 00:30:06.291 }' 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:06.291 13:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 [2024-10-09 13:59:13.150611] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.857 "name": "Existed_Raid", 00:30:06.857 "aliases": [ 00:30:06.857 
"74f2bc93-fc99-45f2-8fe8-9feded08ba27" 00:30:06.857 ], 00:30:06.857 "product_name": "Raid Volume", 00:30:06.857 "block_size": 512, 00:30:06.857 "num_blocks": 65536, 00:30:06.857 "uuid": "74f2bc93-fc99-45f2-8fe8-9feded08ba27", 00:30:06.857 "assigned_rate_limits": { 00:30:06.857 "rw_ios_per_sec": 0, 00:30:06.857 "rw_mbytes_per_sec": 0, 00:30:06.857 "r_mbytes_per_sec": 0, 00:30:06.857 "w_mbytes_per_sec": 0 00:30:06.857 }, 00:30:06.857 "claimed": false, 00:30:06.857 "zoned": false, 00:30:06.857 "supported_io_types": { 00:30:06.857 "read": true, 00:30:06.857 "write": true, 00:30:06.857 "unmap": false, 00:30:06.857 "flush": false, 00:30:06.857 "reset": true, 00:30:06.857 "nvme_admin": false, 00:30:06.857 "nvme_io": false, 00:30:06.857 "nvme_io_md": false, 00:30:06.857 "write_zeroes": true, 00:30:06.857 "zcopy": false, 00:30:06.857 "get_zone_info": false, 00:30:06.857 "zone_management": false, 00:30:06.857 "zone_append": false, 00:30:06.857 "compare": false, 00:30:06.857 "compare_and_write": false, 00:30:06.857 "abort": false, 00:30:06.857 "seek_hole": false, 00:30:06.857 "seek_data": false, 00:30:06.857 "copy": false, 00:30:06.857 "nvme_iov_md": false 00:30:06.857 }, 00:30:06.857 "memory_domains": [ 00:30:06.857 { 00:30:06.857 "dma_device_id": "system", 00:30:06.857 "dma_device_type": 1 00:30:06.857 }, 00:30:06.857 { 00:30:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:06.857 "dma_device_type": 2 00:30:06.857 }, 00:30:06.857 { 00:30:06.857 "dma_device_id": "system", 00:30:06.857 "dma_device_type": 1 00:30:06.857 }, 00:30:06.857 { 00:30:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:06.857 "dma_device_type": 2 00:30:06.857 } 00:30:06.857 ], 00:30:06.857 "driver_specific": { 00:30:06.857 "raid": { 00:30:06.857 "uuid": "74f2bc93-fc99-45f2-8fe8-9feded08ba27", 00:30:06.857 "strip_size_kb": 0, 00:30:06.857 "state": "online", 00:30:06.857 "raid_level": "raid1", 00:30:06.857 "superblock": false, 00:30:06.857 "num_base_bdevs": 2, 00:30:06.857 
"num_base_bdevs_discovered": 2, 00:30:06.857 "num_base_bdevs_operational": 2, 00:30:06.857 "base_bdevs_list": [ 00:30:06.857 { 00:30:06.857 "name": "BaseBdev1", 00:30:06.857 "uuid": "abf02091-c8f1-4612-b978-70ed0e9f2d6b", 00:30:06.857 "is_configured": true, 00:30:06.857 "data_offset": 0, 00:30:06.857 "data_size": 65536 00:30:06.857 }, 00:30:06.857 { 00:30:06.857 "name": "BaseBdev2", 00:30:06.857 "uuid": "3ae4daf7-727d-4753-b74c-96ac465e1197", 00:30:06.857 "is_configured": true, 00:30:06.857 "data_offset": 0, 00:30:06.857 "data_size": 65536 00:30:06.857 } 00:30:06.857 ] 00:30:06.857 } 00:30:06.857 } 00:30:06.857 }' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:06.857 BaseBdev2' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 [2024-10-09 13:59:13.366344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:06.857 13:59:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:06.857 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.115 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:30:07.115 "name": "Existed_Raid", 00:30:07.115 "uuid": "74f2bc93-fc99-45f2-8fe8-9feded08ba27", 00:30:07.115 "strip_size_kb": 0, 00:30:07.115 "state": "online", 00:30:07.115 "raid_level": "raid1", 00:30:07.115 "superblock": false, 00:30:07.115 "num_base_bdevs": 2, 00:30:07.115 "num_base_bdevs_discovered": 1, 00:30:07.115 "num_base_bdevs_operational": 1, 00:30:07.115 "base_bdevs_list": [ 00:30:07.115 { 00:30:07.115 "name": null, 00:30:07.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.115 "is_configured": false, 00:30:07.115 "data_offset": 0, 00:30:07.115 "data_size": 65536 00:30:07.115 }, 00:30:07.115 { 00:30:07.115 "name": "BaseBdev2", 00:30:07.115 "uuid": "3ae4daf7-727d-4753-b74c-96ac465e1197", 00:30:07.115 "is_configured": true, 00:30:07.115 "data_offset": 0, 00:30:07.115 "data_size": 65536 00:30:07.115 } 00:30:07.115 ] 00:30:07.115 }' 00:30:07.115 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.115 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:07.394 13:59:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.394 [2024-10-09 13:59:13.882934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:07.394 [2024-10-09 13:59:13.883047] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:07.394 [2024-10-09 13:59:13.895738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:07.394 [2024-10-09 13:59:13.895792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:07.394 [2024-10-09 13:59:13.895813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.394 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.660 
13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74287 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74287 ']' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74287 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74287 00:30:07.660 killing process with pid 74287 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74287' 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74287 00:30:07.660 13:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74287 00:30:07.660 [2024-10-09 13:59:13.987051] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:07.660 [2024-10-09 13:59:13.988248] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:07.918 00:30:07.918 real 0m4.080s 00:30:07.918 user 0m6.460s 00:30:07.918 sys 0m0.840s 00:30:07.918 
13:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.918 ************************************ 00:30:07.918 END TEST raid_state_function_test 00:30:07.918 ************************************ 00:30:07.918 13:59:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:30:07.918 13:59:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:07.918 13:59:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:07.918 13:59:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:07.918 ************************************ 00:30:07.918 START TEST raid_state_function_test_sb 00:30:07.918 ************************************ 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:07.918 Process raid pid: 74529 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74529 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74529' 00:30:07.918 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74529 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74529 ']' 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:07.919 13:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.919 [2024-10-09 13:59:14.419875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:07.919 [2024-10-09 13:59:14.420069] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.177 [2024-10-09 13:59:14.601365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.177 [2024-10-09 13:59:14.648038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.177 [2024-10-09 13:59:14.692305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:08.177 [2024-10-09 13:59:14.692344] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.111 [2024-10-09 13:59:15.375920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:09.111 [2024-10-09 13:59:15.375979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:09.111 [2024-10-09 13:59:15.376004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:09.111 [2024-10-09 13:59:15.376021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.111 
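The `verify_raid_bdev_state Existed_Raid configuring raid1 0 2` call that follows pulls the named raid bdev's JSON out of `rpc_cmd bdev_raid_get_bdevs all` and compares a few flat fields. A hedged sketch of that comparison, with the RPC output stubbed by a literal (our own helper name, not SPDK's) so it runs without a live SPDK target:

```shell
# Sketch of the field checks done by verify_raid_bdev_state; the JSON here
# is a stub standing in for `rpc.py bdev_raid_get_bdevs all` output.
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid1","num_base_bdevs_discovered":0,"num_base_bdevs_operational":2}'

check_field() {
    # crude extraction, good enough for the flat scalar fields used here
    printf '%s' "$raid_bdev_info" | grep -q "\"$1\":$2"
}

check_field state '"configuring"' &&
check_field raid_level '"raid1"' &&
check_field num_base_bdevs_discovered 0 &&
echo "Existed_Raid is configuring as expected"
```

The real helper uses `jq -r '.[] | select(.name == "Existed_Raid")'` as the log shows; the grep stands in only to keep the sketch dependency-free.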
13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.111 "name": "Existed_Raid", 00:30:09.111 "uuid": "fdc800b0-8cd5-40c5-87a8-f9c423c5cf8f", 00:30:09.111 "strip_size_kb": 0, 
00:30:09.111 "state": "configuring", 00:30:09.111 "raid_level": "raid1", 00:30:09.111 "superblock": true, 00:30:09.111 "num_base_bdevs": 2, 00:30:09.111 "num_base_bdevs_discovered": 0, 00:30:09.111 "num_base_bdevs_operational": 2, 00:30:09.111 "base_bdevs_list": [ 00:30:09.111 { 00:30:09.111 "name": "BaseBdev1", 00:30:09.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.111 "is_configured": false, 00:30:09.111 "data_offset": 0, 00:30:09.111 "data_size": 0 00:30:09.111 }, 00:30:09.111 { 00:30:09.111 "name": "BaseBdev2", 00:30:09.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.111 "is_configured": false, 00:30:09.111 "data_offset": 0, 00:30:09.111 "data_size": 0 00:30:09.111 } 00:30:09.111 ] 00:30:09.111 }' 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.111 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 [2024-10-09 13:59:15.827943] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:09.369 [2024-10-09 13:59:15.827997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.369 13:59:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 [2024-10-09 13:59:15.835986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:09.369 [2024-10-09 13:59:15.836037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:09.369 [2024-10-09 13:59:15.836049] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:09.369 [2024-10-09 13:59:15.836064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 [2024-10-09 13:59:15.854179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:09.369 BaseBdev1 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.369 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.369 [ 00:30:09.369 { 00:30:09.369 "name": "BaseBdev1", 00:30:09.369 "aliases": [ 00:30:09.369 "cf7d4646-1dc8-4e83-aff4-730cceecb113" 00:30:09.369 ], 00:30:09.369 "product_name": "Malloc disk", 00:30:09.369 "block_size": 512, 00:30:09.369 "num_blocks": 65536, 00:30:09.369 "uuid": "cf7d4646-1dc8-4e83-aff4-730cceecb113", 00:30:09.369 "assigned_rate_limits": { 00:30:09.369 "rw_ios_per_sec": 0, 00:30:09.369 "rw_mbytes_per_sec": 0, 00:30:09.370 "r_mbytes_per_sec": 0, 00:30:09.370 "w_mbytes_per_sec": 0 00:30:09.370 }, 00:30:09.370 "claimed": true, 00:30:09.370 "claim_type": "exclusive_write", 00:30:09.370 "zoned": false, 00:30:09.370 "supported_io_types": { 00:30:09.370 "read": true, 00:30:09.370 "write": true, 00:30:09.370 "unmap": true, 00:30:09.370 "flush": true, 00:30:09.370 "reset": true, 00:30:09.370 "nvme_admin": false, 00:30:09.370 "nvme_io": false, 00:30:09.370 "nvme_io_md": false, 00:30:09.370 "write_zeroes": true, 00:30:09.370 "zcopy": true, 00:30:09.370 "get_zone_info": false, 00:30:09.370 "zone_management": false, 00:30:09.370 "zone_append": false, 00:30:09.370 "compare": false, 00:30:09.370 "compare_and_write": false, 00:30:09.370 
"abort": true, 00:30:09.370 "seek_hole": false, 00:30:09.370 "seek_data": false, 00:30:09.370 "copy": true, 00:30:09.370 "nvme_iov_md": false 00:30:09.370 }, 00:30:09.370 "memory_domains": [ 00:30:09.370 { 00:30:09.370 "dma_device_id": "system", 00:30:09.370 "dma_device_type": 1 00:30:09.370 }, 00:30:09.370 { 00:30:09.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.370 "dma_device_type": 2 00:30:09.370 } 00:30:09.370 ], 00:30:09.370 "driver_specific": {} 00:30:09.370 } 00:30:09.370 ] 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.370 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.627 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.628 "name": "Existed_Raid", 00:30:09.628 "uuid": "a2a258fe-673f-45e6-ba70-3a06361d1129", 00:30:09.628 "strip_size_kb": 0, 00:30:09.628 "state": "configuring", 00:30:09.628 "raid_level": "raid1", 00:30:09.628 "superblock": true, 00:30:09.628 "num_base_bdevs": 2, 00:30:09.628 "num_base_bdevs_discovered": 1, 00:30:09.628 "num_base_bdevs_operational": 2, 00:30:09.628 "base_bdevs_list": [ 00:30:09.628 { 00:30:09.628 "name": "BaseBdev1", 00:30:09.628 "uuid": "cf7d4646-1dc8-4e83-aff4-730cceecb113", 00:30:09.628 "is_configured": true, 00:30:09.628 "data_offset": 2048, 00:30:09.628 "data_size": 63488 00:30:09.628 }, 00:30:09.628 { 00:30:09.628 "name": "BaseBdev2", 00:30:09.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.628 "is_configured": false, 00:30:09.628 "data_offset": 0, 00:30:09.628 "data_size": 0 00:30:09.628 } 00:30:09.628 ] 00:30:09.628 }' 00:30:09.628 13:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.628 13:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.886 [2024-10-09 13:59:16.330362] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:09.886 [2024-10-09 13:59:16.330420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.886 [2024-10-09 13:59:16.342364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:09.886 [2024-10-09 13:59:16.344610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:09.886 [2024-10-09 13:59:16.344777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.886 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.886 "name": "Existed_Raid", 00:30:09.886 "uuid": "70541627-6ded-4636-a550-5f214029dadc", 00:30:09.886 "strip_size_kb": 0, 00:30:09.886 "state": "configuring", 00:30:09.886 "raid_level": "raid1", 00:30:09.886 "superblock": true, 00:30:09.886 "num_base_bdevs": 2, 00:30:09.886 "num_base_bdevs_discovered": 1, 00:30:09.886 "num_base_bdevs_operational": 2, 00:30:09.886 "base_bdevs_list": [ 00:30:09.886 { 00:30:09.886 "name": "BaseBdev1", 00:30:09.887 "uuid": "cf7d4646-1dc8-4e83-aff4-730cceecb113", 00:30:09.887 "is_configured": true, 00:30:09.887 "data_offset": 2048, 
00:30:09.887 "data_size": 63488 00:30:09.887 }, 00:30:09.887 { 00:30:09.887 "name": "BaseBdev2", 00:30:09.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.887 "is_configured": false, 00:30:09.887 "data_offset": 0, 00:30:09.887 "data_size": 0 00:30:09.887 } 00:30:09.887 ] 00:30:09.887 }' 00:30:09.887 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.887 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.454 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:10.454 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.454 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.454 [2024-10-09 13:59:16.812042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:10.455 [2024-10-09 13:59:16.812416] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:10.455 [2024-10-09 13:59:16.812471] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:10.455 [2024-10-09 13:59:16.812886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:10.455 BaseBdev2 00:30:10.455 [2024-10-09 13:59:16.813134] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:10.455 [2024-10-09 13:59:16.813168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:30:10.455 [2024-10-09 13:59:16.813293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.455 [ 00:30:10.455 { 00:30:10.455 "name": "BaseBdev2", 00:30:10.455 "aliases": [ 00:30:10.455 "effbbf09-b825-46f8-8f29-56b07b34472b" 00:30:10.455 ], 00:30:10.455 "product_name": "Malloc disk", 00:30:10.455 "block_size": 512, 00:30:10.455 "num_blocks": 65536, 00:30:10.455 "uuid": "effbbf09-b825-46f8-8f29-56b07b34472b", 00:30:10.455 "assigned_rate_limits": { 00:30:10.455 "rw_ios_per_sec": 0, 00:30:10.455 "rw_mbytes_per_sec": 0, 00:30:10.455 "r_mbytes_per_sec": 0, 00:30:10.455 "w_mbytes_per_sec": 0 00:30:10.455 }, 00:30:10.455 "claimed": true, 00:30:10.455 "claim_type": 
"exclusive_write", 00:30:10.455 "zoned": false, 00:30:10.455 "supported_io_types": { 00:30:10.455 "read": true, 00:30:10.455 "write": true, 00:30:10.455 "unmap": true, 00:30:10.455 "flush": true, 00:30:10.455 "reset": true, 00:30:10.455 "nvme_admin": false, 00:30:10.455 "nvme_io": false, 00:30:10.455 "nvme_io_md": false, 00:30:10.455 "write_zeroes": true, 00:30:10.455 "zcopy": true, 00:30:10.455 "get_zone_info": false, 00:30:10.455 "zone_management": false, 00:30:10.455 "zone_append": false, 00:30:10.455 "compare": false, 00:30:10.455 "compare_and_write": false, 00:30:10.455 "abort": true, 00:30:10.455 "seek_hole": false, 00:30:10.455 "seek_data": false, 00:30:10.455 "copy": true, 00:30:10.455 "nvme_iov_md": false 00:30:10.455 }, 00:30:10.455 "memory_domains": [ 00:30:10.455 { 00:30:10.455 "dma_device_id": "system", 00:30:10.455 "dma_device_type": 1 00:30:10.455 }, 00:30:10.455 { 00:30:10.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:10.455 "dma_device_type": 2 00:30:10.455 } 00:30:10.455 ], 00:30:10.455 "driver_specific": {} 00:30:10.455 } 00:30:10.455 ] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:10.455 "name": "Existed_Raid", 00:30:10.455 "uuid": "70541627-6ded-4636-a550-5f214029dadc", 00:30:10.455 "strip_size_kb": 0, 00:30:10.455 "state": "online", 00:30:10.455 "raid_level": "raid1", 00:30:10.455 "superblock": true, 00:30:10.455 "num_base_bdevs": 2, 00:30:10.455 "num_base_bdevs_discovered": 2, 00:30:10.455 "num_base_bdevs_operational": 2, 00:30:10.455 "base_bdevs_list": [ 00:30:10.455 { 00:30:10.455 "name": "BaseBdev1", 00:30:10.455 "uuid": "cf7d4646-1dc8-4e83-aff4-730cceecb113", 00:30:10.455 "is_configured": true, 00:30:10.455 "data_offset": 2048, 00:30:10.455 "data_size": 63488 
00:30:10.455 }, 00:30:10.455 { 00:30:10.455 "name": "BaseBdev2", 00:30:10.455 "uuid": "effbbf09-b825-46f8-8f29-56b07b34472b", 00:30:10.455 "is_configured": true, 00:30:10.455 "data_offset": 2048, 00:30:10.455 "data_size": 63488 00:30:10.455 } 00:30:10.455 ] 00:30:10.455 }' 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:10.455 13:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:11.023 [2024-10-09 13:59:17.292476] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.023 "name": 
"Existed_Raid", 00:30:11.023 "aliases": [ 00:30:11.023 "70541627-6ded-4636-a550-5f214029dadc" 00:30:11.023 ], 00:30:11.023 "product_name": "Raid Volume", 00:30:11.023 "block_size": 512, 00:30:11.023 "num_blocks": 63488, 00:30:11.023 "uuid": "70541627-6ded-4636-a550-5f214029dadc", 00:30:11.023 "assigned_rate_limits": { 00:30:11.023 "rw_ios_per_sec": 0, 00:30:11.023 "rw_mbytes_per_sec": 0, 00:30:11.023 "r_mbytes_per_sec": 0, 00:30:11.023 "w_mbytes_per_sec": 0 00:30:11.023 }, 00:30:11.023 "claimed": false, 00:30:11.023 "zoned": false, 00:30:11.023 "supported_io_types": { 00:30:11.023 "read": true, 00:30:11.023 "write": true, 00:30:11.023 "unmap": false, 00:30:11.023 "flush": false, 00:30:11.023 "reset": true, 00:30:11.023 "nvme_admin": false, 00:30:11.023 "nvme_io": false, 00:30:11.023 "nvme_io_md": false, 00:30:11.023 "write_zeroes": true, 00:30:11.023 "zcopy": false, 00:30:11.023 "get_zone_info": false, 00:30:11.023 "zone_management": false, 00:30:11.023 "zone_append": false, 00:30:11.023 "compare": false, 00:30:11.023 "compare_and_write": false, 00:30:11.023 "abort": false, 00:30:11.023 "seek_hole": false, 00:30:11.023 "seek_data": false, 00:30:11.023 "copy": false, 00:30:11.023 "nvme_iov_md": false 00:30:11.023 }, 00:30:11.023 "memory_domains": [ 00:30:11.023 { 00:30:11.023 "dma_device_id": "system", 00:30:11.023 "dma_device_type": 1 00:30:11.023 }, 00:30:11.023 { 00:30:11.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.023 "dma_device_type": 2 00:30:11.023 }, 00:30:11.023 { 00:30:11.023 "dma_device_id": "system", 00:30:11.023 "dma_device_type": 1 00:30:11.023 }, 00:30:11.023 { 00:30:11.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.023 "dma_device_type": 2 00:30:11.023 } 00:30:11.023 ], 00:30:11.023 "driver_specific": { 00:30:11.023 "raid": { 00:30:11.023 "uuid": "70541627-6ded-4636-a550-5f214029dadc", 00:30:11.023 "strip_size_kb": 0, 00:30:11.023 "state": "online", 00:30:11.023 "raid_level": "raid1", 00:30:11.023 "superblock": true, 00:30:11.023 
"num_base_bdevs": 2, 00:30:11.023 "num_base_bdevs_discovered": 2, 00:30:11.023 "num_base_bdevs_operational": 2, 00:30:11.023 "base_bdevs_list": [ 00:30:11.023 { 00:30:11.023 "name": "BaseBdev1", 00:30:11.023 "uuid": "cf7d4646-1dc8-4e83-aff4-730cceecb113", 00:30:11.023 "is_configured": true, 00:30:11.023 "data_offset": 2048, 00:30:11.023 "data_size": 63488 00:30:11.023 }, 00:30:11.023 { 00:30:11.023 "name": "BaseBdev2", 00:30:11.023 "uuid": "effbbf09-b825-46f8-8f29-56b07b34472b", 00:30:11.023 "is_configured": true, 00:30:11.023 "data_offset": 2048, 00:30:11.023 "data_size": 63488 00:30:11.023 } 00:30:11.023 ] 00:30:11.023 } 00:30:11.023 } 00:30:11.023 }' 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:11.023 BaseBdev2' 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.023 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.024 [2024-10-09 13:59:17.524269] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:30:11.024 13:59:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.024 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.024 13:59:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.283 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:11.283 "name": "Existed_Raid", 00:30:11.283 "uuid": "70541627-6ded-4636-a550-5f214029dadc", 00:30:11.283 "strip_size_kb": 0, 00:30:11.283 "state": "online", 00:30:11.283 "raid_level": "raid1", 00:30:11.283 "superblock": true, 00:30:11.283 "num_base_bdevs": 2, 00:30:11.283 "num_base_bdevs_discovered": 1, 00:30:11.283 "num_base_bdevs_operational": 1, 00:30:11.283 "base_bdevs_list": [ 00:30:11.283 { 00:30:11.283 "name": null, 00:30:11.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.283 "is_configured": false, 00:30:11.283 "data_offset": 0, 00:30:11.283 "data_size": 63488 00:30:11.283 }, 00:30:11.283 { 00:30:11.283 "name": "BaseBdev2", 00:30:11.283 "uuid": "effbbf09-b825-46f8-8f29-56b07b34472b", 00:30:11.283 "is_configured": true, 00:30:11.283 "data_offset": 2048, 00:30:11.283 "data_size": 63488 00:30:11.283 } 00:30:11.283 ] 00:30:11.283 }' 00:30:11.283 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:11.283 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:11.542 13:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.542 13:59:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.542 [2024-10-09 13:59:18.016616] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:11.542 [2024-10-09 13:59:18.016713] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:11.542 [2024-10-09 13:59:18.029200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:11.542 [2024-10-09 13:59:18.029250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:11.542 [2024-10-09 13:59:18.029265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74529 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74529 ']' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74529 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:11.542 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74529 00:30:11.800 killing process with pid 74529 00:30:11.800 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:11.801 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:11.801 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74529' 00:30:11.801 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74529 00:30:11.801 [2024-10-09 13:59:18.121967] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:11.801 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74529 
00:30:11.801 [2024-10-09 13:59:18.123103] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:12.060 13:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:12.060 ************************************ 00:30:12.060 END TEST raid_state_function_test_sb 00:30:12.060 ************************************ 00:30:12.060 00:30:12.060 real 0m4.068s 00:30:12.060 user 0m6.449s 00:30:12.060 sys 0m0.796s 00:30:12.060 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:12.060 13:59:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.060 13:59:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:30:12.060 13:59:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:12.060 13:59:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:12.060 13:59:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:12.060 ************************************ 00:30:12.060 START TEST raid_superblock_test 00:30:12.060 ************************************ 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 
-- # base_bdevs_pt_uuid=() 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74770 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74770 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74770 ']' 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.060 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.060 [2024-10-09 13:59:18.514811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:12.060 [2024-10-09 13:59:18.514968] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74770 ] 00:30:12.319 [2024-10-09 13:59:18.673308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.319 [2024-10-09 13:59:18.723919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.319 [2024-10-09 13:59:18.768006] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:12.319 [2024-10-09 13:59:18.768040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:12.319 13:59:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.319 malloc1 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.319 [2024-10-09 13:59:18.840856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:12.319 [2024-10-09 13:59:18.841051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.319 [2024-10-09 13:59:18.841114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:12.319 [2024-10-09 13:59:18.841214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.319 [2024-10-09 13:59:18.843789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.319 [2024-10-09 13:59:18.843936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:12.319 pt1 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:12.319 13:59:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.319 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.578 malloc2 00:30:12.578 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.578 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:12.578 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.578 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.578 [2024-10-09 13:59:18.880243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:12.578 [2024-10-09 13:59:18.880321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:12.579 [2024-10-09 13:59:18.880348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:12.579 
[2024-10-09 13:59:18.880369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:12.579 [2024-10-09 13:59:18.883509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:12.579 [2024-10-09 13:59:18.883688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:12.579 pt2 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.579 [2024-10-09 13:59:18.892383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:12.579 [2024-10-09 13:59:18.894847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:12.579 [2024-10-09 13:59:18.895095] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:30:12.579 [2024-10-09 13:59:18.895118] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:12.579 [2024-10-09 13:59:18.895402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:12.579 [2024-10-09 13:59:18.895538] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:30:12.579 [2024-10-09 13:59:18.895568] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:30:12.579 [2024-10-09 13:59:18.895712] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.579 "name": "raid_bdev1", 00:30:12.579 "uuid": 
"d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:12.579 "strip_size_kb": 0, 00:30:12.579 "state": "online", 00:30:12.579 "raid_level": "raid1", 00:30:12.579 "superblock": true, 00:30:12.579 "num_base_bdevs": 2, 00:30:12.579 "num_base_bdevs_discovered": 2, 00:30:12.579 "num_base_bdevs_operational": 2, 00:30:12.579 "base_bdevs_list": [ 00:30:12.579 { 00:30:12.579 "name": "pt1", 00:30:12.579 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:12.579 "is_configured": true, 00:30:12.579 "data_offset": 2048, 00:30:12.579 "data_size": 63488 00:30:12.579 }, 00:30:12.579 { 00:30:12.579 "name": "pt2", 00:30:12.579 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:12.579 "is_configured": true, 00:30:12.579 "data_offset": 2048, 00:30:12.579 "data_size": 63488 00:30:12.579 } 00:30:12.579 ] 00:30:12.579 }' 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.579 13:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:12.838 13:59:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.838 [2024-10-09 13:59:19.336771] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.838 "name": "raid_bdev1", 00:30:12.838 "aliases": [ 00:30:12.838 "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b" 00:30:12.838 ], 00:30:12.838 "product_name": "Raid Volume", 00:30:12.838 "block_size": 512, 00:30:12.838 "num_blocks": 63488, 00:30:12.838 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:12.838 "assigned_rate_limits": { 00:30:12.838 "rw_ios_per_sec": 0, 00:30:12.838 "rw_mbytes_per_sec": 0, 00:30:12.838 "r_mbytes_per_sec": 0, 00:30:12.838 "w_mbytes_per_sec": 0 00:30:12.838 }, 00:30:12.838 "claimed": false, 00:30:12.838 "zoned": false, 00:30:12.838 "supported_io_types": { 00:30:12.838 "read": true, 00:30:12.838 "write": true, 00:30:12.838 "unmap": false, 00:30:12.838 "flush": false, 00:30:12.838 "reset": true, 00:30:12.838 "nvme_admin": false, 00:30:12.838 "nvme_io": false, 00:30:12.838 "nvme_io_md": false, 00:30:12.838 "write_zeroes": true, 00:30:12.838 "zcopy": false, 00:30:12.838 "get_zone_info": false, 00:30:12.838 "zone_management": false, 00:30:12.838 "zone_append": false, 00:30:12.838 "compare": false, 00:30:12.838 "compare_and_write": false, 00:30:12.838 "abort": false, 00:30:12.838 "seek_hole": false, 00:30:12.838 "seek_data": false, 00:30:12.838 "copy": false, 00:30:12.838 "nvme_iov_md": false 00:30:12.838 }, 00:30:12.838 "memory_domains": [ 00:30:12.838 { 00:30:12.838 "dma_device_id": "system", 00:30:12.838 "dma_device_type": 1 00:30:12.838 }, 00:30:12.838 { 00:30:12.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.838 "dma_device_type": 2 00:30:12.838 }, 00:30:12.838 { 00:30:12.838 "dma_device_id": "system", 00:30:12.838 "dma_device_type": 
1 00:30:12.838 }, 00:30:12.838 { 00:30:12.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.838 "dma_device_type": 2 00:30:12.838 } 00:30:12.838 ], 00:30:12.838 "driver_specific": { 00:30:12.838 "raid": { 00:30:12.838 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:12.838 "strip_size_kb": 0, 00:30:12.838 "state": "online", 00:30:12.838 "raid_level": "raid1", 00:30:12.838 "superblock": true, 00:30:12.838 "num_base_bdevs": 2, 00:30:12.838 "num_base_bdevs_discovered": 2, 00:30:12.838 "num_base_bdevs_operational": 2, 00:30:12.838 "base_bdevs_list": [ 00:30:12.838 { 00:30:12.838 "name": "pt1", 00:30:12.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:12.838 "is_configured": true, 00:30:12.838 "data_offset": 2048, 00:30:12.838 "data_size": 63488 00:30:12.838 }, 00:30:12.838 { 00:30:12.838 "name": "pt2", 00:30:12.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:12.838 "is_configured": true, 00:30:12.838 "data_offset": 2048, 00:30:12.838 "data_size": 63488 00:30:12.838 } 00:30:12.838 ] 00:30:12.838 } 00:30:12.838 } 00:30:12.838 }' 00:30:12.838 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:13.097 pt2' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:13.097 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:13.098 [2024-10-09 13:59:19.560745] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:13.098 13:59:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d90061f5-6bf1-442c-b20f-c2dcc0e5b47b 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d90061f5-6bf1-442c-b20f-c2dcc0e5b47b ']' 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 [2024-10-09 13:59:19.604487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:13.098 [2024-10-09 13:59:19.604516] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:13.098 [2024-10-09 13:59:19.604605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:13.098 [2024-10-09 13:59:19.604683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:13.098 [2024-10-09 13:59:19.604717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:13.098 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.357 13:59:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 [2024-10-09 13:59:19.732597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:13.357 [2024-10-09 13:59:19.735145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:13.357 [2024-10-09 13:59:19.735355] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:13.357 [2024-10-09 13:59:19.735431] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:13.357 [2024-10-09 13:59:19.735456] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:13.357 [2024-10-09 13:59:19.735468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 
name raid_bdev1, state configuring 00:30:13.357 request: 00:30:13.357 { 00:30:13.357 "name": "raid_bdev1", 00:30:13.357 "raid_level": "raid1", 00:30:13.357 "base_bdevs": [ 00:30:13.357 "malloc1", 00:30:13.357 "malloc2" 00:30:13.357 ], 00:30:13.357 "superblock": false, 00:30:13.357 "method": "bdev_raid_create", 00:30:13.357 "req_id": 1 00:30:13.357 } 00:30:13.357 Got JSON-RPC error response 00:30:13.357 response: 00:30:13.357 { 00:30:13.357 "code": -17, 00:30:13.357 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:13.357 } 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.357 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.357 [2024-10-09 13:59:19.800556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:13.357 [2024-10-09 13:59:19.800626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:13.358 [2024-10-09 13:59:19.800652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:13.358 [2024-10-09 13:59:19.800665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:13.358 [2024-10-09 13:59:19.803451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:13.358 [2024-10-09 13:59:19.803494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:13.358 [2024-10-09 13:59:19.803595] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:13.358 [2024-10-09 13:59:19.803640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:13.358 pt1 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.358 "name": "raid_bdev1", 00:30:13.358 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:13.358 "strip_size_kb": 0, 00:30:13.358 "state": "configuring", 00:30:13.358 "raid_level": "raid1", 00:30:13.358 "superblock": true, 00:30:13.358 "num_base_bdevs": 2, 00:30:13.358 "num_base_bdevs_discovered": 1, 00:30:13.358 "num_base_bdevs_operational": 2, 00:30:13.358 "base_bdevs_list": [ 00:30:13.358 { 00:30:13.358 "name": "pt1", 00:30:13.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:13.358 "is_configured": true, 00:30:13.358 "data_offset": 2048, 00:30:13.358 "data_size": 63488 00:30:13.358 }, 00:30:13.358 { 00:30:13.358 "name": null, 00:30:13.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:13.358 "is_configured": false, 00:30:13.358 "data_offset": 2048, 00:30:13.358 "data_size": 63488 00:30:13.358 } 00:30:13.358 ] 00:30:13.358 }' 00:30:13.358 13:59:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.358 13:59:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.924 [2024-10-09 13:59:20.256676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:13.924 [2024-10-09 13:59:20.256749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:13.924 [2024-10-09 13:59:20.256780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:13.924 [2024-10-09 13:59:20.256793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:13.924 [2024-10-09 13:59:20.257221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:13.924 [2024-10-09 13:59:20.257239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:13.924 [2024-10-09 13:59:20.257317] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:13.924 [2024-10-09 13:59:20.257339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:13.924 [2024-10-09 13:59:20.257436] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:13.924 [2024-10-09 13:59:20.257447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:13.924 [2024-10-09 
13:59:20.257716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:13.924 [2024-10-09 13:59:20.257834] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:13.924 [2024-10-09 13:59:20.257851] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:30:13.924 [2024-10-09 13:59:20.257955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:13.924 pt2 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.924 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:13.925 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.925 "name": "raid_bdev1", 00:30:13.925 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:13.925 "strip_size_kb": 0, 00:30:13.925 "state": "online", 00:30:13.925 "raid_level": "raid1", 00:30:13.925 "superblock": true, 00:30:13.925 "num_base_bdevs": 2, 00:30:13.925 "num_base_bdevs_discovered": 2, 00:30:13.925 "num_base_bdevs_operational": 2, 00:30:13.925 "base_bdevs_list": [ 00:30:13.925 { 00:30:13.925 "name": "pt1", 00:30:13.925 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:13.925 "is_configured": true, 00:30:13.925 "data_offset": 2048, 00:30:13.925 "data_size": 63488 00:30:13.925 }, 00:30:13.925 { 00:30:13.925 "name": "pt2", 00:30:13.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:13.925 "is_configured": true, 00:30:13.925 "data_offset": 2048, 00:30:13.925 "data_size": 63488 00:30:13.925 } 00:30:13.925 ] 00:30:13.925 }' 00:30:13.925 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.925 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.182 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.182 [2024-10-09 13:59:20.729041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:14.440 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:14.441 "name": "raid_bdev1", 00:30:14.441 "aliases": [ 00:30:14.441 "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b" 00:30:14.441 ], 00:30:14.441 "product_name": "Raid Volume", 00:30:14.441 "block_size": 512, 00:30:14.441 "num_blocks": 63488, 00:30:14.441 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:14.441 "assigned_rate_limits": { 00:30:14.441 "rw_ios_per_sec": 0, 00:30:14.441 "rw_mbytes_per_sec": 0, 00:30:14.441 "r_mbytes_per_sec": 0, 00:30:14.441 "w_mbytes_per_sec": 0 00:30:14.441 }, 00:30:14.441 "claimed": false, 00:30:14.441 "zoned": false, 00:30:14.441 "supported_io_types": { 00:30:14.441 "read": true, 00:30:14.441 "write": true, 00:30:14.441 "unmap": false, 00:30:14.441 "flush": false, 00:30:14.441 "reset": true, 00:30:14.441 "nvme_admin": false, 00:30:14.441 "nvme_io": false, 00:30:14.441 "nvme_io_md": false, 00:30:14.441 "write_zeroes": true, 00:30:14.441 "zcopy": false, 00:30:14.441 "get_zone_info": false, 
00:30:14.441 "zone_management": false, 00:30:14.441 "zone_append": false, 00:30:14.441 "compare": false, 00:30:14.441 "compare_and_write": false, 00:30:14.441 "abort": false, 00:30:14.441 "seek_hole": false, 00:30:14.441 "seek_data": false, 00:30:14.441 "copy": false, 00:30:14.441 "nvme_iov_md": false 00:30:14.441 }, 00:30:14.441 "memory_domains": [ 00:30:14.441 { 00:30:14.441 "dma_device_id": "system", 00:30:14.441 "dma_device_type": 1 00:30:14.441 }, 00:30:14.441 { 00:30:14.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.441 "dma_device_type": 2 00:30:14.441 }, 00:30:14.441 { 00:30:14.441 "dma_device_id": "system", 00:30:14.441 "dma_device_type": 1 00:30:14.441 }, 00:30:14.441 { 00:30:14.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.441 "dma_device_type": 2 00:30:14.441 } 00:30:14.441 ], 00:30:14.441 "driver_specific": { 00:30:14.441 "raid": { 00:30:14.441 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:14.441 "strip_size_kb": 0, 00:30:14.441 "state": "online", 00:30:14.441 "raid_level": "raid1", 00:30:14.441 "superblock": true, 00:30:14.441 "num_base_bdevs": 2, 00:30:14.441 "num_base_bdevs_discovered": 2, 00:30:14.441 "num_base_bdevs_operational": 2, 00:30:14.441 "base_bdevs_list": [ 00:30:14.441 { 00:30:14.441 "name": "pt1", 00:30:14.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:14.441 "is_configured": true, 00:30:14.441 "data_offset": 2048, 00:30:14.441 "data_size": 63488 00:30:14.441 }, 00:30:14.441 { 00:30:14.441 "name": "pt2", 00:30:14.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:14.441 "is_configured": true, 00:30:14.441 "data_offset": 2048, 00:30:14.441 "data_size": 63488 00:30:14.441 } 00:30:14.441 ] 00:30:14.441 } 00:30:14.441 } 00:30:14.441 }' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:30:14.441 pt2' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.441 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:30:14.700 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:14.700 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:14.700 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.700 13:59:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.700 13:59:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:14.700 [2024-10-09 13:59:20.997081] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d90061f5-6bf1-442c-b20f-c2dcc0e5b47b '!=' d90061f5-6bf1-442c-b20f-c2dcc0e5b47b ']' 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.700 [2024-10-09 13:59:21.040855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.700 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.701 "name": "raid_bdev1", 00:30:14.701 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:14.701 "strip_size_kb": 0, 00:30:14.701 "state": "online", 00:30:14.701 "raid_level": "raid1", 00:30:14.701 "superblock": true, 00:30:14.701 "num_base_bdevs": 2, 00:30:14.701 "num_base_bdevs_discovered": 1, 00:30:14.701 "num_base_bdevs_operational": 1, 00:30:14.701 "base_bdevs_list": [ 00:30:14.701 { 00:30:14.701 "name": null, 00:30:14.701 "uuid": "00000000-0000-0000-0000-000000000000", 
00:30:14.701 "is_configured": false, 00:30:14.701 "data_offset": 0, 00:30:14.701 "data_size": 63488 00:30:14.701 }, 00:30:14.701 { 00:30:14.701 "name": "pt2", 00:30:14.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:14.701 "is_configured": true, 00:30:14.701 "data_offset": 2048, 00:30:14.701 "data_size": 63488 00:30:14.701 } 00:30:14.701 ] 00:30:14.701 }' 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.701 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.959 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:14.959 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:14.959 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.959 [2024-10-09 13:59:21.504902] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:14.959 [2024-10-09 13:59:21.505080] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:14.959 [2024-10-09 13:59:21.505254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:14.959 [2024-10-09 13:59:21.505344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:14.959 [2024-10-09 13:59:21.505585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.218 
13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.218 [2024-10-09 13:59:21.564906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:15.218 
[2024-10-09 13:59:21.564962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.218 [2024-10-09 13:59:21.564985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:15.218 [2024-10-09 13:59:21.564998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.218 [2024-10-09 13:59:21.567681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.218 [2024-10-09 13:59:21.567718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:15.218 [2024-10-09 13:59:21.567797] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:15.218 [2024-10-09 13:59:21.567829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:15.218 [2024-10-09 13:59:21.567905] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:30:15.218 [2024-10-09 13:59:21.567914] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:15.218 [2024-10-09 13:59:21.568172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:15.218 [2024-10-09 13:59:21.568300] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:30:15.218 [2024-10-09 13:59:21.568315] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:30:15.218 [2024-10-09 13:59:21.568422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.218 pt2 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:15.218 
13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.218 "name": "raid_bdev1", 00:30:15.218 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:15.218 "strip_size_kb": 0, 00:30:15.218 "state": "online", 00:30:15.218 "raid_level": "raid1", 00:30:15.218 "superblock": true, 00:30:15.218 "num_base_bdevs": 2, 00:30:15.218 "num_base_bdevs_discovered": 1, 00:30:15.218 "num_base_bdevs_operational": 1, 00:30:15.218 "base_bdevs_list": [ 00:30:15.218 { 00:30:15.218 "name": null, 00:30:15.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.218 
"is_configured": false, 00:30:15.218 "data_offset": 2048, 00:30:15.218 "data_size": 63488 00:30:15.218 }, 00:30:15.218 { 00:30:15.218 "name": "pt2", 00:30:15.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:15.218 "is_configured": true, 00:30:15.218 "data_offset": 2048, 00:30:15.218 "data_size": 63488 00:30:15.218 } 00:30:15.218 ] 00:30:15.218 }' 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.218 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.477 13:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:15.477 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.477 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.477 [2024-10-09 13:59:21.997064] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:15.477 [2024-10-09 13:59:21.997095] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:15.477 [2024-10-09 13:59:21.997173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.477 [2024-10-09 13:59:21.997222] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.477 [2024-10-09 13:59:21.997236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:30:15.477 13:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.477 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.477 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:30:15.477 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.477 13:59:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.477 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.736 [2024-10-09 13:59:22.053033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:15.736 [2024-10-09 13:59:22.053206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.736 [2024-10-09 13:59:22.053241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:30:15.736 [2024-10-09 13:59:22.053261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.736 [2024-10-09 13:59:22.055946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.736 [2024-10-09 13:59:22.055991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:15.736 [2024-10-09 13:59:22.056068] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:15.736 [2024-10-09 13:59:22.056110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:15.736 [2024-10-09 13:59:22.056207] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 
00:30:15.736 [2024-10-09 13:59:22.056222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:15.736 [2024-10-09 13:59:22.056240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:30:15.736 [2024-10-09 13:59:22.056284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:15.736 [2024-10-09 13:59:22.056352] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:30:15.736 [2024-10-09 13:59:22.056408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:15.736 [2024-10-09 13:59:22.056667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:15.736 [2024-10-09 13:59:22.056785] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:30:15.736 [2024-10-09 13:59:22.056796] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:30:15.736 [2024-10-09 13:59:22.056918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.736 pt1 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:15.736 13:59:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:15.736 "name": "raid_bdev1", 00:30:15.736 "uuid": "d90061f5-6bf1-442c-b20f-c2dcc0e5b47b", 00:30:15.736 "strip_size_kb": 0, 00:30:15.736 "state": "online", 00:30:15.736 "raid_level": "raid1", 00:30:15.736 "superblock": true, 00:30:15.736 "num_base_bdevs": 2, 00:30:15.736 "num_base_bdevs_discovered": 1, 00:30:15.736 "num_base_bdevs_operational": 1, 00:30:15.736 "base_bdevs_list": [ 00:30:15.736 { 00:30:15.736 "name": null, 00:30:15.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.736 "is_configured": false, 00:30:15.736 "data_offset": 2048, 00:30:15.736 "data_size": 63488 00:30:15.736 }, 00:30:15.736 { 00:30:15.736 "name": "pt2", 00:30:15.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:15.736 "is_configured": true, 00:30:15.736 "data_offset": 2048, 00:30:15.736 "data_size": 63488 00:30:15.736 } 
00:30:15.736 ] 00:30:15.736 }' 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:15.736 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.995 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.253 [2024-10-09 13:59:22.545444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d90061f5-6bf1-442c-b20f-c2dcc0e5b47b '!=' d90061f5-6bf1-442c-b20f-c2dcc0e5b47b ']' 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74770 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74770 ']' 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 74770 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74770 00:30:16.253 killing process with pid 74770 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74770' 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74770 00:30:16.253 [2024-10-09 13:59:22.626503] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:16.253 [2024-10-09 13:59:22.626614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:16.253 [2024-10-09 13:59:22.626668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:16.253 [2024-10-09 13:59:22.626680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:30:16.253 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74770 00:30:16.253 [2024-10-09 13:59:22.651792] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:16.512 13:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:16.512 00:30:16.512 real 0m4.461s 00:30:16.512 user 0m7.632s 00:30:16.512 sys 0m1.012s 00:30:16.512 ************************************ 00:30:16.512 END TEST raid_superblock_test 00:30:16.512 ************************************ 00:30:16.512 13:59:22 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:16.512 13:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.512 13:59:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:30:16.512 13:59:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:16.512 13:59:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.512 13:59:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:16.512 ************************************ 00:30:16.512 START TEST raid_read_error_test 00:30:16.512 ************************************ 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uJNVUtw3Wd 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75086 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75086 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75086 ']' 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.512 13:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:16.771 [2024-10-09 13:59:23.089832] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:16.771 [2024-10-09 13:59:23.090096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75086 ] 00:30:16.771 [2024-10-09 13:59:23.291196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.030 [2024-10-09 13:59:23.337010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.030 [2024-10-09 13:59:23.381933] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.030 [2024-10-09 13:59:23.381971] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 BaseBdev1_malloc 00:30:17.598 13:59:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 true 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 [2024-10-09 13:59:23.987129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:17.598 [2024-10-09 13:59:23.987184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.598 [2024-10-09 13:59:23.987228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:17.598 [2024-10-09 13:59:23.987253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.598 [2024-10-09 13:59:23.989809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.598 [2024-10-09 13:59:23.989851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:17.598 BaseBdev1 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 BaseBdev2_malloc 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 true 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 [2024-10-09 13:59:24.042440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:17.598 [2024-10-09 13:59:24.042492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:17.598 [2024-10-09 13:59:24.042513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:17.598 [2024-10-09 13:59:24.042525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:17.598 [2024-10-09 13:59:24.044958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:17.598 [2024-10-09 13:59:24.044996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:17.598 BaseBdev2 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.598 [2024-10-09 13:59:24.054473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:17.598 [2024-10-09 13:59:24.056714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:17.598 [2024-10-09 13:59:24.056882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:17.598 [2024-10-09 13:59:24.056895] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:17.598 [2024-10-09 13:59:24.057176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:17.598 [2024-10-09 13:59:24.057311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:17.598 [2024-10-09 13:59:24.057325] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:30:17.598 [2024-10-09 13:59:24.057448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:17.598 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:17.599 "name": "raid_bdev1", 00:30:17.599 "uuid": "fcf7928d-7e23-45f4-89b2-f58495cc3b2f", 00:30:17.599 "strip_size_kb": 0, 00:30:17.599 "state": "online", 00:30:17.599 "raid_level": "raid1", 00:30:17.599 "superblock": true, 00:30:17.599 "num_base_bdevs": 2, 00:30:17.599 "num_base_bdevs_discovered": 2, 00:30:17.599 "num_base_bdevs_operational": 2, 00:30:17.599 "base_bdevs_list": [ 00:30:17.599 { 00:30:17.599 "name": "BaseBdev1", 00:30:17.599 "uuid": "c5849f71-b904-5b86-a5de-878cad727607", 00:30:17.599 "is_configured": true, 00:30:17.599 "data_offset": 2048, 00:30:17.599 "data_size": 63488 00:30:17.599 }, 00:30:17.599 { 00:30:17.599 "name": "BaseBdev2", 00:30:17.599 "uuid": 
"f7616afa-a21c-5bc7-8016-87662f97bcac", 00:30:17.599 "is_configured": true, 00:30:17.599 "data_offset": 2048, 00:30:17.599 "data_size": 63488 00:30:17.599 } 00:30:17.599 ] 00:30:17.599 }' 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:17.599 13:59:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.166 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:18.166 13:59:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:18.166 [2024-10-09 13:59:24.587405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:19.102 "name": "raid_bdev1", 00:30:19.102 "uuid": "fcf7928d-7e23-45f4-89b2-f58495cc3b2f", 00:30:19.102 "strip_size_kb": 0, 00:30:19.102 "state": "online", 00:30:19.102 "raid_level": "raid1", 00:30:19.102 "superblock": true, 00:30:19.102 "num_base_bdevs": 2, 00:30:19.102 "num_base_bdevs_discovered": 2, 00:30:19.102 "num_base_bdevs_operational": 2, 00:30:19.102 "base_bdevs_list": [ 00:30:19.102 { 00:30:19.102 "name": "BaseBdev1", 00:30:19.102 "uuid": "c5849f71-b904-5b86-a5de-878cad727607", 00:30:19.102 "is_configured": true, 00:30:19.102 "data_offset": 2048, 00:30:19.102 
"data_size": 63488 00:30:19.102 }, 00:30:19.102 { 00:30:19.102 "name": "BaseBdev2", 00:30:19.102 "uuid": "f7616afa-a21c-5bc7-8016-87662f97bcac", 00:30:19.102 "is_configured": true, 00:30:19.102 "data_offset": 2048, 00:30:19.102 "data_size": 63488 00:30:19.102 } 00:30:19.102 ] 00:30:19.102 }' 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:19.102 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.361 [2024-10-09 13:59:25.888452] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:19.361 [2024-10-09 13:59:25.888510] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:19.361 [2024-10-09 13:59:25.891280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:19.361 [2024-10-09 13:59:25.891339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:19.361 [2024-10-09 13:59:25.891451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:19.361 [2024-10-09 13:59:25.891467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:30:19.361 { 00:30:19.361 "results": [ 00:30:19.361 { 00:30:19.361 "job": "raid_bdev1", 00:30:19.361 "core_mask": "0x1", 00:30:19.361 "workload": "randrw", 00:30:19.361 "percentage": 50, 00:30:19.361 "status": "finished", 00:30:19.361 "queue_depth": 1, 00:30:19.361 "io_size": 131072, 00:30:19.361 "runtime": 1.297235, 00:30:19.361 "iops": 15237.794231577162, 00:30:19.361 "mibps": 1904.7242789471452, 
00:30:19.361 "io_failed": 0, 00:30:19.361 "io_timeout": 0, 00:30:19.361 "avg_latency_us": 62.69512814768241, 00:30:19.361 "min_latency_us": 23.283809523809524, 00:30:19.361 "max_latency_us": 1513.5695238095238 00:30:19.361 } 00:30:19.361 ], 00:30:19.361 "core_count": 1 00:30:19.361 } 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75086 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75086 ']' 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75086 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:19.361 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75086 00:30:19.619 killing process with pid 75086 00:30:19.619 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:19.619 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:19.619 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75086' 00:30:19.619 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75086 00:30:19.619 [2024-10-09 13:59:25.937614] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:19.619 13:59:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75086 00:30:19.619 [2024-10-09 13:59:25.966699] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uJNVUtw3Wd 00:30:19.877 13:59:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:30:19.877 ************************************ 00:30:19.877 END TEST raid_read_error_test 00:30:19.877 ************************************ 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:30:19.877 00:30:19.877 real 0m3.398s 00:30:19.877 user 0m4.245s 00:30:19.877 sys 0m0.571s 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.877 13:59:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.877 13:59:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:30:19.877 13:59:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:19.877 13:59:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.877 13:59:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:19.877 ************************************ 00:30:19.877 START TEST raid_write_error_test 00:30:19.877 ************************************ 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:30:19.877 13:59:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:30:19.877 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:20.136 13:59:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9282Izq1h2 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75216 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75216 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75216 ']' 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.136 13:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.136 [2024-10-09 13:59:26.526640] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:20.136 [2024-10-09 13:59:26.526806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75216 ] 00:30:20.394 [2024-10-09 13:59:26.691847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.394 [2024-10-09 13:59:26.771391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.394 [2024-10-09 13:59:26.851850] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:20.394 [2024-10-09 13:59:26.851906] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.962 BaseBdev1_malloc 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:20.962 true 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.962 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 [2024-10-09 13:59:27.514751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:21.221 [2024-10-09 13:59:27.515017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:21.221 [2024-10-09 13:59:27.515060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:21.221 [2024-10-09 13:59:27.515077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.221 [2024-10-09 13:59:27.518171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.221 [2024-10-09 13:59:27.518351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:21.221 BaseBdev1 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 BaseBdev2_malloc 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:21.221 13:59:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 true 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 [2024-10-09 13:59:27.560121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:21.221 [2024-10-09 13:59:27.560198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:21.221 [2024-10-09 13:59:27.560224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:21.221 [2024-10-09 13:59:27.560239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.221 [2024-10-09 13:59:27.563331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.221 [2024-10-09 13:59:27.563378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:21.221 BaseBdev2 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 [2024-10-09 13:59:27.568269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:30:21.221 [2024-10-09 13:59:27.570974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:21.221 [2024-10-09 13:59:27.571182] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:21.221 [2024-10-09 13:59:27.571199] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:21.221 [2024-10-09 13:59:27.571517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:30:21.221 [2024-10-09 13:59:27.571722] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:21.221 [2024-10-09 13:59:27.571741] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:30:21.221 [2024-10-09 13:59:27.571885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:21.221 "name": "raid_bdev1", 00:30:21.221 "uuid": "34576877-cf76-48fd-abe1-bf69cff7cacf", 00:30:21.221 "strip_size_kb": 0, 00:30:21.221 "state": "online", 00:30:21.221 "raid_level": "raid1", 00:30:21.221 "superblock": true, 00:30:21.221 "num_base_bdevs": 2, 00:30:21.221 "num_base_bdevs_discovered": 2, 00:30:21.221 "num_base_bdevs_operational": 2, 00:30:21.221 "base_bdevs_list": [ 00:30:21.221 { 00:30:21.221 "name": "BaseBdev1", 00:30:21.221 "uuid": "ecff9585-5c5b-52e2-8cce-5ce11701fe76", 00:30:21.221 "is_configured": true, 00:30:21.221 "data_offset": 2048, 00:30:21.221 "data_size": 63488 00:30:21.221 }, 00:30:21.221 { 00:30:21.221 "name": "BaseBdev2", 00:30:21.221 "uuid": "b1bf69c5-532f-5cb4-96f3-76987add12e4", 00:30:21.221 "is_configured": true, 00:30:21.221 "data_offset": 2048, 00:30:21.221 "data_size": 63488 00:30:21.221 } 00:30:21.221 ] 00:30:21.221 }' 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:21.221 13:59:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.521 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:21.521 13:59:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:21.779 [2024-10-09 13:59:28.117034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:22.715 13:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:22.715 13:59:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.715 13:59:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 [2024-10-09 13:59:29.006355] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:30:22.715 [2024-10-09 13:59:29.006443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:22.715 [2024-10-09 13:59:29.006707] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:22.715 13:59:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.715 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:22.715 "name": "raid_bdev1", 00:30:22.716 "uuid": "34576877-cf76-48fd-abe1-bf69cff7cacf", 00:30:22.716 "strip_size_kb": 0, 00:30:22.716 "state": "online", 00:30:22.716 "raid_level": "raid1", 00:30:22.716 "superblock": true, 00:30:22.716 "num_base_bdevs": 2, 00:30:22.716 "num_base_bdevs_discovered": 1, 00:30:22.716 "num_base_bdevs_operational": 1, 00:30:22.716 "base_bdevs_list": [ 00:30:22.716 { 00:30:22.716 "name": null, 00:30:22.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.716 "is_configured": false, 00:30:22.716 "data_offset": 0, 00:30:22.716 "data_size": 63488 00:30:22.716 }, 
00:30:22.716 { 00:30:22.716 "name": "BaseBdev2", 00:30:22.716 "uuid": "b1bf69c5-532f-5cb4-96f3-76987add12e4", 00:30:22.716 "is_configured": true, 00:30:22.716 "data_offset": 2048, 00:30:22.716 "data_size": 63488 00:30:22.716 } 00:30:22.716 ] 00:30:22.716 }' 00:30:22.716 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:22.716 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.975 [2024-10-09 13:59:29.452691] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:22.975 [2024-10-09 13:59:29.452741] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:22.975 [2024-10-09 13:59:29.456771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:22.975 [2024-10-09 13:59:29.456910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:22.975 [2024-10-09 13:59:29.456996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:22.975 [2024-10-09 13:59:29.457018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:30:22.975 { 00:30:22.975 "results": [ 00:30:22.975 { 00:30:22.975 "job": "raid_bdev1", 00:30:22.975 "core_mask": "0x1", 00:30:22.975 "workload": "randrw", 00:30:22.975 "percentage": 50, 00:30:22.975 "status": "finished", 00:30:22.975 "queue_depth": 1, 00:30:22.975 "io_size": 131072, 00:30:22.975 "runtime": 1.332946, 00:30:22.975 "iops": 16477.036579126237, 00:30:22.975 "mibps": 2059.6295723907797, 00:30:22.975 "io_failed": 0, 
00:30:22.975 "io_timeout": 0, 00:30:22.975 "avg_latency_us": 57.71581313160878, 00:30:22.975 "min_latency_us": 24.746666666666666, 00:30:22.975 "max_latency_us": 1490.1638095238095 00:30:22.975 } 00:30:22.975 ], 00:30:22.975 "core_count": 1 00:30:22.975 } 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75216 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75216 ']' 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75216 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75216 00:30:22.975 killing process with pid 75216 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75216' 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75216 00:30:22.975 [2024-10-09 13:59:29.501127] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:22.975 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75216 00:30:22.975 [2024-10-09 13:59:29.518813] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:23.234 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9282Izq1h2 00:30:23.234 13:59:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:23.234 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:30:23.494 00:30:23.494 real 0m3.374s 00:30:23.494 user 0m4.205s 00:30:23.494 sys 0m0.671s 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.494 13:59:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.494 ************************************ 00:30:23.494 END TEST raid_write_error_test 00:30:23.494 ************************************ 00:30:23.494 13:59:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:30:23.494 13:59:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:30:23.494 13:59:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:30:23.494 13:59:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:23.494 13:59:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.494 13:59:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:23.494 ************************************ 00:30:23.494 START TEST raid_state_function_test 00:30:23.494 ************************************ 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:30:23.494 13:59:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75343 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:23.494 Process raid pid: 75343 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75343' 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75343 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75343 ']' 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:23.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:23.494 13:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.494 [2024-10-09 13:59:29.949587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:23.494 [2024-10-09 13:59:29.949798] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.754 [2024-10-09 13:59:30.137996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.754 [2024-10-09 13:59:30.190021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.754 [2024-10-09 13:59:30.240654] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:23.754 [2024-10-09 13:59:30.240714] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.690 [2024-10-09 13:59:30.914178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:24.690 [2024-10-09 13:59:30.914247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:24.690 [2024-10-09 13:59:30.914266] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:24.690 [2024-10-09 13:59:30.914280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:24.690 [2024-10-09 13:59:30.914289] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:24.690 [2024-10-09 13:59:30.914305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.690 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.690 "name": "Existed_Raid", 00:30:24.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.690 "strip_size_kb": 64, 00:30:24.690 "state": "configuring", 00:30:24.690 "raid_level": "raid0", 00:30:24.690 "superblock": false, 00:30:24.690 "num_base_bdevs": 3, 00:30:24.690 "num_base_bdevs_discovered": 0, 00:30:24.690 "num_base_bdevs_operational": 3, 00:30:24.690 "base_bdevs_list": [ 00:30:24.690 { 00:30:24.690 "name": "BaseBdev1", 00:30:24.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.690 "is_configured": false, 00:30:24.690 "data_offset": 0, 00:30:24.690 "data_size": 0 00:30:24.690 }, 00:30:24.690 { 00:30:24.690 "name": "BaseBdev2", 00:30:24.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.690 "is_configured": false, 00:30:24.690 "data_offset": 0, 00:30:24.690 "data_size": 0 00:30:24.691 }, 00:30:24.691 { 00:30:24.691 "name": "BaseBdev3", 00:30:24.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.691 "is_configured": false, 00:30:24.691 "data_offset": 0, 00:30:24.691 "data_size": 0 00:30:24.691 } 00:30:24.691 ] 00:30:24.691 }' 00:30:24.691 13:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.691 13:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.949 13:59:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.949 [2024-10-09 13:59:31.350177] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:24.949 [2024-10-09 13:59:31.350222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.949 [2024-10-09 13:59:31.358216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:24.949 [2024-10-09 13:59:31.358256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:24.949 [2024-10-09 13:59:31.358266] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:24.949 [2024-10-09 13:59:31.358279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:24.949 [2024-10-09 13:59:31.358287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:24.949 [2024-10-09 13:59:31.358299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.949 [2024-10-09 13:59:31.375635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.949 BaseBdev1 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:24.949 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.950 [ 00:30:24.950 { 00:30:24.950 "name": "BaseBdev1", 00:30:24.950 "aliases": [ 00:30:24.950 "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f" 00:30:24.950 ], 00:30:24.950 
"product_name": "Malloc disk", 00:30:24.950 "block_size": 512, 00:30:24.950 "num_blocks": 65536, 00:30:24.950 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:24.950 "assigned_rate_limits": { 00:30:24.950 "rw_ios_per_sec": 0, 00:30:24.950 "rw_mbytes_per_sec": 0, 00:30:24.950 "r_mbytes_per_sec": 0, 00:30:24.950 "w_mbytes_per_sec": 0 00:30:24.950 }, 00:30:24.950 "claimed": true, 00:30:24.950 "claim_type": "exclusive_write", 00:30:24.950 "zoned": false, 00:30:24.950 "supported_io_types": { 00:30:24.950 "read": true, 00:30:24.950 "write": true, 00:30:24.950 "unmap": true, 00:30:24.950 "flush": true, 00:30:24.950 "reset": true, 00:30:24.950 "nvme_admin": false, 00:30:24.950 "nvme_io": false, 00:30:24.950 "nvme_io_md": false, 00:30:24.950 "write_zeroes": true, 00:30:24.950 "zcopy": true, 00:30:24.950 "get_zone_info": false, 00:30:24.950 "zone_management": false, 00:30:24.950 "zone_append": false, 00:30:24.950 "compare": false, 00:30:24.950 "compare_and_write": false, 00:30:24.950 "abort": true, 00:30:24.950 "seek_hole": false, 00:30:24.950 "seek_data": false, 00:30:24.950 "copy": true, 00:30:24.950 "nvme_iov_md": false 00:30:24.950 }, 00:30:24.950 "memory_domains": [ 00:30:24.950 { 00:30:24.950 "dma_device_id": "system", 00:30:24.950 "dma_device_type": 1 00:30:24.950 }, 00:30:24.950 { 00:30:24.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:24.950 "dma_device_type": 2 00:30:24.950 } 00:30:24.950 ], 00:30:24.950 "driver_specific": {} 00:30:24.950 } 00:30:24.950 ] 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:24.950 13:59:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.950 "name": "Existed_Raid", 00:30:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.950 "strip_size_kb": 64, 00:30:24.950 "state": "configuring", 00:30:24.950 "raid_level": "raid0", 00:30:24.950 "superblock": false, 00:30:24.950 "num_base_bdevs": 3, 00:30:24.950 "num_base_bdevs_discovered": 1, 00:30:24.950 "num_base_bdevs_operational": 3, 00:30:24.950 "base_bdevs_list": [ 00:30:24.950 { 00:30:24.950 "name": "BaseBdev1", 
00:30:24.950 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:24.950 "is_configured": true, 00:30:24.950 "data_offset": 0, 00:30:24.950 "data_size": 65536 00:30:24.950 }, 00:30:24.950 { 00:30:24.950 "name": "BaseBdev2", 00:30:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.950 "is_configured": false, 00:30:24.950 "data_offset": 0, 00:30:24.950 "data_size": 0 00:30:24.950 }, 00:30:24.950 { 00:30:24.950 "name": "BaseBdev3", 00:30:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.950 "is_configured": false, 00:30:24.950 "data_offset": 0, 00:30:24.950 "data_size": 0 00:30:24.950 } 00:30:24.950 ] 00:30:24.950 }' 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.950 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.517 [2024-10-09 13:59:31.835784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:25.517 [2024-10-09 13:59:31.835838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.517 [2024-10-09 
13:59:31.843809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:25.517 [2024-10-09 13:59:31.846031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:25.517 [2024-10-09 13:59:31.846074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:25.517 [2024-10-09 13:59:31.846085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:25.517 [2024-10-09 13:59:31.846099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.517 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.517 "name": "Existed_Raid", 00:30:25.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.517 "strip_size_kb": 64, 00:30:25.517 "state": "configuring", 00:30:25.517 "raid_level": "raid0", 00:30:25.517 "superblock": false, 00:30:25.517 "num_base_bdevs": 3, 00:30:25.517 "num_base_bdevs_discovered": 1, 00:30:25.517 "num_base_bdevs_operational": 3, 00:30:25.517 "base_bdevs_list": [ 00:30:25.517 { 00:30:25.517 "name": "BaseBdev1", 00:30:25.517 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:25.517 "is_configured": true, 00:30:25.517 "data_offset": 0, 00:30:25.517 "data_size": 65536 00:30:25.517 }, 00:30:25.517 { 00:30:25.517 "name": "BaseBdev2", 00:30:25.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.517 "is_configured": false, 00:30:25.517 "data_offset": 0, 00:30:25.517 "data_size": 0 00:30:25.517 }, 00:30:25.518 { 00:30:25.518 "name": "BaseBdev3", 00:30:25.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.518 "is_configured": false, 00:30:25.518 "data_offset": 0, 00:30:25.518 "data_size": 0 00:30:25.518 } 00:30:25.518 ] 00:30:25.518 }' 00:30:25.518 13:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:30:25.518 13:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.776 [2024-10-09 13:59:32.313452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:25.776 BaseBdev2 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.776 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.035 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.035 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:26.035 13:59:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.035 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.035 [ 00:30:26.035 { 00:30:26.035 "name": "BaseBdev2", 00:30:26.035 "aliases": [ 00:30:26.035 "8fd6b5a4-7215-451b-9722-75f93a248986" 00:30:26.035 ], 00:30:26.035 "product_name": "Malloc disk", 00:30:26.035 "block_size": 512, 00:30:26.035 "num_blocks": 65536, 00:30:26.035 "uuid": "8fd6b5a4-7215-451b-9722-75f93a248986", 00:30:26.035 "assigned_rate_limits": { 00:30:26.035 "rw_ios_per_sec": 0, 00:30:26.035 "rw_mbytes_per_sec": 0, 00:30:26.035 "r_mbytes_per_sec": 0, 00:30:26.035 "w_mbytes_per_sec": 0 00:30:26.035 }, 00:30:26.035 "claimed": true, 00:30:26.035 "claim_type": "exclusive_write", 00:30:26.035 "zoned": false, 00:30:26.035 "supported_io_types": { 00:30:26.035 "read": true, 00:30:26.035 "write": true, 00:30:26.035 "unmap": true, 00:30:26.035 "flush": true, 00:30:26.035 "reset": true, 00:30:26.035 "nvme_admin": false, 00:30:26.035 "nvme_io": false, 00:30:26.035 "nvme_io_md": false, 00:30:26.035 "write_zeroes": true, 00:30:26.035 "zcopy": true, 00:30:26.035 "get_zone_info": false, 00:30:26.035 "zone_management": false, 00:30:26.035 "zone_append": false, 00:30:26.035 "compare": false, 00:30:26.035 "compare_and_write": false, 00:30:26.035 "abort": true, 00:30:26.035 "seek_hole": false, 00:30:26.035 "seek_data": false, 00:30:26.035 "copy": true, 00:30:26.035 "nvme_iov_md": false 00:30:26.035 }, 00:30:26.035 "memory_domains": [ 00:30:26.035 { 00:30:26.035 "dma_device_id": "system", 00:30:26.035 "dma_device_type": 1 00:30:26.035 }, 00:30:26.035 { 00:30:26.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.035 "dma_device_type": 2 00:30:26.035 } 00:30:26.035 ], 00:30:26.035 "driver_specific": {} 00:30:26.035 } 00:30:26.035 ] 00:30:26.035 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.035 13:59:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.036 "name": "Existed_Raid", 00:30:26.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.036 "strip_size_kb": 64, 00:30:26.036 "state": "configuring", 00:30:26.036 "raid_level": "raid0", 00:30:26.036 "superblock": false, 00:30:26.036 "num_base_bdevs": 3, 00:30:26.036 "num_base_bdevs_discovered": 2, 00:30:26.036 "num_base_bdevs_operational": 3, 00:30:26.036 "base_bdevs_list": [ 00:30:26.036 { 00:30:26.036 "name": "BaseBdev1", 00:30:26.036 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:26.036 "is_configured": true, 00:30:26.036 "data_offset": 0, 00:30:26.036 "data_size": 65536 00:30:26.036 }, 00:30:26.036 { 00:30:26.036 "name": "BaseBdev2", 00:30:26.036 "uuid": "8fd6b5a4-7215-451b-9722-75f93a248986", 00:30:26.036 "is_configured": true, 00:30:26.036 "data_offset": 0, 00:30:26.036 "data_size": 65536 00:30:26.036 }, 00:30:26.036 { 00:30:26.036 "name": "BaseBdev3", 00:30:26.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.036 "is_configured": false, 00:30:26.036 "data_offset": 0, 00:30:26.036 "data_size": 0 00:30:26.036 } 00:30:26.036 ] 00:30:26.036 }' 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.036 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.295 [2024-10-09 13:59:32.820716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:26.295 [2024-10-09 13:59:32.820917] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:26.295 [2024-10-09 13:59:32.820969] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:26.295 [2024-10-09 13:59:32.821386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:26.295 [2024-10-09 13:59:32.821545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:26.295 [2024-10-09 13:59:32.821558] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:30:26.295 [2024-10-09 13:59:32.821782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.295 BaseBdev3 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.295 
13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.295 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.295 [ 00:30:26.295 { 00:30:26.295 "name": "BaseBdev3", 00:30:26.295 "aliases": [ 00:30:26.295 "97ec66d2-2507-4ca0-824a-7395624c64a9" 00:30:26.296 ], 00:30:26.296 "product_name": "Malloc disk", 00:30:26.296 "block_size": 512, 00:30:26.296 "num_blocks": 65536, 00:30:26.296 "uuid": "97ec66d2-2507-4ca0-824a-7395624c64a9", 00:30:26.296 "assigned_rate_limits": { 00:30:26.555 "rw_ios_per_sec": 0, 00:30:26.555 "rw_mbytes_per_sec": 0, 00:30:26.555 "r_mbytes_per_sec": 0, 00:30:26.555 "w_mbytes_per_sec": 0 00:30:26.555 }, 00:30:26.555 "claimed": true, 00:30:26.555 "claim_type": "exclusive_write", 00:30:26.555 "zoned": false, 00:30:26.555 "supported_io_types": { 00:30:26.555 "read": true, 00:30:26.555 "write": true, 00:30:26.555 "unmap": true, 00:30:26.555 "flush": true, 00:30:26.555 "reset": true, 00:30:26.555 "nvme_admin": false, 00:30:26.555 "nvme_io": false, 00:30:26.555 "nvme_io_md": false, 00:30:26.555 "write_zeroes": true, 00:30:26.555 "zcopy": true, 00:30:26.555 "get_zone_info": false, 00:30:26.555 "zone_management": false, 00:30:26.555 "zone_append": false, 00:30:26.555 "compare": false, 00:30:26.555 "compare_and_write": false, 00:30:26.555 "abort": true, 00:30:26.555 "seek_hole": false, 00:30:26.555 "seek_data": false, 00:30:26.555 "copy": true, 00:30:26.555 "nvme_iov_md": false 00:30:26.555 }, 00:30:26.555 "memory_domains": [ 00:30:26.555 { 00:30:26.555 "dma_device_id": "system", 00:30:26.555 "dma_device_type": 1 00:30:26.555 }, 00:30:26.555 { 00:30:26.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.555 "dma_device_type": 2 00:30:26.555 } 00:30:26.555 ], 00:30:26.555 "driver_specific": {} 00:30:26.555 } 00:30:26.555 ] 
00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:26.555 "name": "Existed_Raid", 00:30:26.555 "uuid": "370cc88c-4396-4993-9835-1487f9f815a3", 00:30:26.555 "strip_size_kb": 64, 00:30:26.555 "state": "online", 00:30:26.555 "raid_level": "raid0", 00:30:26.555 "superblock": false, 00:30:26.555 "num_base_bdevs": 3, 00:30:26.555 "num_base_bdevs_discovered": 3, 00:30:26.555 "num_base_bdevs_operational": 3, 00:30:26.555 "base_bdevs_list": [ 00:30:26.555 { 00:30:26.555 "name": "BaseBdev1", 00:30:26.555 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:26.555 "is_configured": true, 00:30:26.555 "data_offset": 0, 00:30:26.555 "data_size": 65536 00:30:26.555 }, 00:30:26.555 { 00:30:26.555 "name": "BaseBdev2", 00:30:26.555 "uuid": "8fd6b5a4-7215-451b-9722-75f93a248986", 00:30:26.555 "is_configured": true, 00:30:26.555 "data_offset": 0, 00:30:26.555 "data_size": 65536 00:30:26.555 }, 00:30:26.555 { 00:30:26.555 "name": "BaseBdev3", 00:30:26.555 "uuid": "97ec66d2-2507-4ca0-824a-7395624c64a9", 00:30:26.555 "is_configured": true, 00:30:26.555 "data_offset": 0, 00:30:26.555 "data_size": 65536 00:30:26.555 } 00:30:26.555 ] 00:30:26.555 }' 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:26.555 13:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:26.814 [2024-10-09 13:59:33.325163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.814 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.814 "name": "Existed_Raid", 00:30:26.814 "aliases": [ 00:30:26.814 "370cc88c-4396-4993-9835-1487f9f815a3" 00:30:26.814 ], 00:30:26.814 "product_name": "Raid Volume", 00:30:26.814 "block_size": 512, 00:30:26.814 "num_blocks": 196608, 00:30:26.814 "uuid": "370cc88c-4396-4993-9835-1487f9f815a3", 00:30:26.814 "assigned_rate_limits": { 00:30:26.814 "rw_ios_per_sec": 0, 00:30:26.814 "rw_mbytes_per_sec": 0, 00:30:26.814 "r_mbytes_per_sec": 0, 00:30:26.814 "w_mbytes_per_sec": 0 00:30:26.814 }, 00:30:26.814 "claimed": false, 00:30:26.814 "zoned": false, 00:30:26.814 "supported_io_types": { 00:30:26.814 "read": true, 00:30:26.814 "write": true, 00:30:26.814 "unmap": true, 00:30:26.814 "flush": true, 00:30:26.814 "reset": true, 00:30:26.814 "nvme_admin": false, 00:30:26.814 "nvme_io": false, 00:30:26.814 "nvme_io_md": false, 00:30:26.814 "write_zeroes": true, 00:30:26.814 "zcopy": false, 00:30:26.814 "get_zone_info": false, 00:30:26.815 "zone_management": false, 00:30:26.815 
"zone_append": false, 00:30:26.815 "compare": false, 00:30:26.815 "compare_and_write": false, 00:30:26.815 "abort": false, 00:30:26.815 "seek_hole": false, 00:30:26.815 "seek_data": false, 00:30:26.815 "copy": false, 00:30:26.815 "nvme_iov_md": false 00:30:26.815 }, 00:30:26.815 "memory_domains": [ 00:30:26.815 { 00:30:26.815 "dma_device_id": "system", 00:30:26.815 "dma_device_type": 1 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.815 "dma_device_type": 2 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "dma_device_id": "system", 00:30:26.815 "dma_device_type": 1 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.815 "dma_device_type": 2 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "dma_device_id": "system", 00:30:26.815 "dma_device_type": 1 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:26.815 "dma_device_type": 2 00:30:26.815 } 00:30:26.815 ], 00:30:26.815 "driver_specific": { 00:30:26.815 "raid": { 00:30:26.815 "uuid": "370cc88c-4396-4993-9835-1487f9f815a3", 00:30:26.815 "strip_size_kb": 64, 00:30:26.815 "state": "online", 00:30:26.815 "raid_level": "raid0", 00:30:26.815 "superblock": false, 00:30:26.815 "num_base_bdevs": 3, 00:30:26.815 "num_base_bdevs_discovered": 3, 00:30:26.815 "num_base_bdevs_operational": 3, 00:30:26.815 "base_bdevs_list": [ 00:30:26.815 { 00:30:26.815 "name": "BaseBdev1", 00:30:26.815 "uuid": "bdc1b9a2-a788-4f74-ab7b-f8a6e013208f", 00:30:26.815 "is_configured": true, 00:30:26.815 "data_offset": 0, 00:30:26.815 "data_size": 65536 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "name": "BaseBdev2", 00:30:26.815 "uuid": "8fd6b5a4-7215-451b-9722-75f93a248986", 00:30:26.815 "is_configured": true, 00:30:26.815 "data_offset": 0, 00:30:26.815 "data_size": 65536 00:30:26.815 }, 00:30:26.815 { 00:30:26.815 "name": "BaseBdev3", 00:30:26.815 "uuid": "97ec66d2-2507-4ca0-824a-7395624c64a9", 00:30:26.815 "is_configured": true, 
00:30:26.815 "data_offset": 0, 00:30:26.815 "data_size": 65536 00:30:26.815 } 00:30:26.815 ] 00:30:26.815 } 00:30:26.815 } 00:30:26.815 }' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:27.074 BaseBdev2 00:30:27.074 BaseBdev3' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.074 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.075 [2024-10-09 13:59:33.601000] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:27.075 [2024-10-09 13:59:33.601030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:27.075 [2024-10-09 13:59:33.601099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.075 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.334 "name": "Existed_Raid", 00:30:27.334 "uuid": "370cc88c-4396-4993-9835-1487f9f815a3", 00:30:27.334 "strip_size_kb": 64, 00:30:27.334 "state": "offline", 00:30:27.334 "raid_level": "raid0", 00:30:27.334 "superblock": false, 00:30:27.334 "num_base_bdevs": 3, 00:30:27.334 "num_base_bdevs_discovered": 2, 00:30:27.334 "num_base_bdevs_operational": 2, 00:30:27.334 "base_bdevs_list": [ 00:30:27.334 { 00:30:27.334 "name": null, 00:30:27.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.334 "is_configured": false, 00:30:27.334 "data_offset": 0, 00:30:27.334 "data_size": 65536 00:30:27.334 }, 00:30:27.334 { 00:30:27.334 "name": "BaseBdev2", 00:30:27.334 "uuid": "8fd6b5a4-7215-451b-9722-75f93a248986", 00:30:27.334 "is_configured": true, 00:30:27.334 "data_offset": 0, 00:30:27.334 "data_size": 65536 00:30:27.334 }, 00:30:27.334 { 00:30:27.334 "name": "BaseBdev3", 00:30:27.334 "uuid": "97ec66d2-2507-4ca0-824a-7395624c64a9", 00:30:27.334 "is_configured": true, 00:30:27.334 "data_offset": 0, 00:30:27.334 "data_size": 65536 00:30:27.334 } 00:30:27.334 ] 00:30:27.334 }' 00:30:27.334 13:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.334 13:59:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.593 [2024-10-09 13:59:34.117877] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.593 13:59:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:27.593 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 [2024-10-09 13:59:34.186473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:27.853 [2024-10-09 13:59:34.186526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 BaseBdev2 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 [ 00:30:27.853 { 00:30:27.853 "name": "BaseBdev2", 00:30:27.853 "aliases": [ 00:30:27.853 "90265473-51e5-4499-9bb9-919aa557a2e7" 00:30:27.853 ], 00:30:27.853 "product_name": "Malloc disk", 00:30:27.853 "block_size": 512, 00:30:27.853 "num_blocks": 65536, 00:30:27.853 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:27.853 "assigned_rate_limits": { 00:30:27.853 "rw_ios_per_sec": 0, 00:30:27.853 "rw_mbytes_per_sec": 0, 00:30:27.853 "r_mbytes_per_sec": 0, 00:30:27.853 "w_mbytes_per_sec": 0 00:30:27.853 }, 00:30:27.853 "claimed": false, 00:30:27.853 "zoned": false, 00:30:27.853 "supported_io_types": { 00:30:27.853 "read": true, 00:30:27.853 "write": true, 00:30:27.853 "unmap": true, 00:30:27.853 "flush": true, 00:30:27.853 "reset": true, 00:30:27.853 "nvme_admin": false, 00:30:27.853 "nvme_io": false, 00:30:27.853 "nvme_io_md": false, 00:30:27.853 "write_zeroes": true, 00:30:27.853 "zcopy": true, 00:30:27.853 "get_zone_info": false, 00:30:27.853 "zone_management": false, 00:30:27.853 "zone_append": false, 00:30:27.853 "compare": false, 00:30:27.853 "compare_and_write": false, 00:30:27.853 "abort": true, 00:30:27.853 "seek_hole": false, 00:30:27.853 "seek_data": false, 00:30:27.853 "copy": true, 00:30:27.853 "nvme_iov_md": false 00:30:27.853 }, 00:30:27.853 "memory_domains": [ 00:30:27.853 { 00:30:27.853 "dma_device_id": "system", 00:30:27.853 "dma_device_type": 1 00:30:27.853 }, 
00:30:27.853 { 00:30:27.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.853 "dma_device_type": 2 00:30:27.853 } 00:30:27.853 ], 00:30:27.853 "driver_specific": {} 00:30:27.853 } 00:30:27.853 ] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 BaseBdev3 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.853 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.854 [ 00:30:27.854 { 00:30:27.854 "name": "BaseBdev3", 00:30:27.854 "aliases": [ 00:30:27.854 "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8" 00:30:27.854 ], 00:30:27.854 "product_name": "Malloc disk", 00:30:27.854 "block_size": 512, 00:30:27.854 "num_blocks": 65536, 00:30:27.854 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:27.854 "assigned_rate_limits": { 00:30:27.854 "rw_ios_per_sec": 0, 00:30:27.854 "rw_mbytes_per_sec": 0, 00:30:27.854 "r_mbytes_per_sec": 0, 00:30:27.854 "w_mbytes_per_sec": 0 00:30:27.854 }, 00:30:27.854 "claimed": false, 00:30:27.854 "zoned": false, 00:30:27.854 "supported_io_types": { 00:30:27.854 "read": true, 00:30:27.854 "write": true, 00:30:27.854 "unmap": true, 00:30:27.854 "flush": true, 00:30:27.854 "reset": true, 00:30:27.854 "nvme_admin": false, 00:30:27.854 "nvme_io": false, 00:30:27.854 "nvme_io_md": false, 00:30:27.854 "write_zeroes": true, 00:30:27.854 "zcopy": true, 00:30:27.854 "get_zone_info": false, 00:30:27.854 "zone_management": false, 00:30:27.854 "zone_append": false, 00:30:27.854 "compare": false, 00:30:27.854 "compare_and_write": false, 00:30:27.854 "abort": true, 00:30:27.854 "seek_hole": false, 00:30:27.854 "seek_data": false, 00:30:27.854 "copy": true, 00:30:27.854 "nvme_iov_md": false 00:30:27.854 }, 00:30:27.854 "memory_domains": [ 00:30:27.854 { 00:30:27.854 "dma_device_id": "system", 00:30:27.854 "dma_device_type": 1 00:30:27.854 }, 00:30:27.854 { 
00:30:27.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:27.854 "dma_device_type": 2 00:30:27.854 } 00:30:27.854 ], 00:30:27.854 "driver_specific": {} 00:30:27.854 } 00:30:27.854 ] 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.854 [2024-10-09 13:59:34.346332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:27.854 [2024-10-09 13:59:34.346509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:27.854 [2024-10-09 13:59:34.346565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:27.854 [2024-10-09 13:59:34.349064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:27.854 "name": "Existed_Raid", 00:30:27.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.854 "strip_size_kb": 64, 00:30:27.854 "state": "configuring", 00:30:27.854 "raid_level": "raid0", 00:30:27.854 "superblock": false, 00:30:27.854 "num_base_bdevs": 3, 00:30:27.854 "num_base_bdevs_discovered": 2, 00:30:27.854 "num_base_bdevs_operational": 3, 00:30:27.854 "base_bdevs_list": [ 00:30:27.854 { 00:30:27.854 "name": "BaseBdev1", 00:30:27.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.854 
"is_configured": false, 00:30:27.854 "data_offset": 0, 00:30:27.854 "data_size": 0 00:30:27.854 }, 00:30:27.854 { 00:30:27.854 "name": "BaseBdev2", 00:30:27.854 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:27.854 "is_configured": true, 00:30:27.854 "data_offset": 0, 00:30:27.854 "data_size": 65536 00:30:27.854 }, 00:30:27.854 { 00:30:27.854 "name": "BaseBdev3", 00:30:27.854 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:27.854 "is_configured": true, 00:30:27.854 "data_offset": 0, 00:30:27.854 "data_size": 65536 00:30:27.854 } 00:30:27.854 ] 00:30:27.854 }' 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:27.854 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.422 [2024-10-09 13:59:34.786416] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:28.422 13:59:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.422 "name": "Existed_Raid", 00:30:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.422 "strip_size_kb": 64, 00:30:28.422 "state": "configuring", 00:30:28.422 "raid_level": "raid0", 00:30:28.422 "superblock": false, 00:30:28.422 "num_base_bdevs": 3, 00:30:28.422 "num_base_bdevs_discovered": 1, 00:30:28.422 "num_base_bdevs_operational": 3, 00:30:28.422 "base_bdevs_list": [ 00:30:28.422 { 00:30:28.422 "name": "BaseBdev1", 00:30:28.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.422 "is_configured": false, 00:30:28.422 "data_offset": 0, 00:30:28.422 "data_size": 0 00:30:28.422 }, 00:30:28.422 { 00:30:28.422 "name": null, 00:30:28.422 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:28.422 "is_configured": false, 00:30:28.422 "data_offset": 0, 
00:30:28.422 "data_size": 65536 00:30:28.422 }, 00:30:28.422 { 00:30:28.422 "name": "BaseBdev3", 00:30:28.422 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:28.422 "is_configured": true, 00:30:28.422 "data_offset": 0, 00:30:28.422 "data_size": 65536 00:30:28.422 } 00:30:28.422 ] 00:30:28.422 }' 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.422 13:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 [2024-10-09 13:59:35.297504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:28.991 BaseBdev1 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 [ 00:30:28.991 { 00:30:28.991 "name": "BaseBdev1", 00:30:28.991 "aliases": [ 00:30:28.991 "0551abcd-0da7-437c-9bf2-6137f8f8aec0" 00:30:28.991 ], 00:30:28.991 "product_name": "Malloc disk", 00:30:28.991 "block_size": 512, 00:30:28.991 "num_blocks": 65536, 00:30:28.991 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:28.991 "assigned_rate_limits": { 00:30:28.991 "rw_ios_per_sec": 0, 00:30:28.991 "rw_mbytes_per_sec": 0, 00:30:28.991 "r_mbytes_per_sec": 0, 00:30:28.991 "w_mbytes_per_sec": 0 00:30:28.991 }, 00:30:28.991 "claimed": true, 00:30:28.991 "claim_type": "exclusive_write", 00:30:28.991 "zoned": false, 00:30:28.991 "supported_io_types": { 00:30:28.991 "read": true, 00:30:28.991 "write": true, 00:30:28.991 "unmap": 
true, 00:30:28.991 "flush": true, 00:30:28.991 "reset": true, 00:30:28.991 "nvme_admin": false, 00:30:28.991 "nvme_io": false, 00:30:28.991 "nvme_io_md": false, 00:30:28.991 "write_zeroes": true, 00:30:28.991 "zcopy": true, 00:30:28.991 "get_zone_info": false, 00:30:28.991 "zone_management": false, 00:30:28.991 "zone_append": false, 00:30:28.991 "compare": false, 00:30:28.991 "compare_and_write": false, 00:30:28.991 "abort": true, 00:30:28.991 "seek_hole": false, 00:30:28.991 "seek_data": false, 00:30:28.991 "copy": true, 00:30:28.991 "nvme_iov_md": false 00:30:28.991 }, 00:30:28.991 "memory_domains": [ 00:30:28.991 { 00:30:28.991 "dma_device_id": "system", 00:30:28.991 "dma_device_type": 1 00:30:28.991 }, 00:30:28.991 { 00:30:28.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:28.991 "dma_device_type": 2 00:30:28.991 } 00:30:28.991 ], 00:30:28.991 "driver_specific": {} 00:30:28.991 } 00:30:28.991 ] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:28.991 13:59:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:28.991 "name": "Existed_Raid", 00:30:28.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.991 "strip_size_kb": 64, 00:30:28.991 "state": "configuring", 00:30:28.991 "raid_level": "raid0", 00:30:28.991 "superblock": false, 00:30:28.991 "num_base_bdevs": 3, 00:30:28.991 "num_base_bdevs_discovered": 2, 00:30:28.991 "num_base_bdevs_operational": 3, 00:30:28.991 "base_bdevs_list": [ 00:30:28.991 { 00:30:28.991 "name": "BaseBdev1", 00:30:28.991 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:28.991 "is_configured": true, 00:30:28.991 "data_offset": 0, 00:30:28.991 "data_size": 65536 00:30:28.991 }, 00:30:28.991 { 00:30:28.991 "name": null, 00:30:28.991 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:28.991 "is_configured": false, 00:30:28.991 "data_offset": 0, 00:30:28.991 "data_size": 65536 00:30:28.991 }, 00:30:28.991 { 00:30:28.991 "name": "BaseBdev3", 00:30:28.991 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:28.991 "is_configured": true, 00:30:28.991 "data_offset": 0, 
00:30:28.991 "data_size": 65536 00:30:28.991 } 00:30:28.991 ] 00:30:28.991 }' 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:28.991 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.250 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.250 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.250 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.250 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.509 [2024-10-09 13:59:35.841709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:29.509 "name": "Existed_Raid", 00:30:29.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.509 "strip_size_kb": 64, 00:30:29.509 "state": "configuring", 00:30:29.509 "raid_level": "raid0", 00:30:29.509 "superblock": false, 00:30:29.509 "num_base_bdevs": 3, 00:30:29.509 "num_base_bdevs_discovered": 1, 00:30:29.509 "num_base_bdevs_operational": 3, 00:30:29.509 "base_bdevs_list": [ 00:30:29.509 { 00:30:29.509 "name": "BaseBdev1", 00:30:29.509 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:29.509 "is_configured": true, 00:30:29.509 "data_offset": 0, 00:30:29.509 "data_size": 65536 00:30:29.509 }, 00:30:29.509 { 
00:30:29.509 "name": null, 00:30:29.509 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:29.509 "is_configured": false, 00:30:29.509 "data_offset": 0, 00:30:29.509 "data_size": 65536 00:30:29.509 }, 00:30:29.509 { 00:30:29.509 "name": null, 00:30:29.509 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:29.509 "is_configured": false, 00:30:29.509 "data_offset": 0, 00:30:29.509 "data_size": 65536 00:30:29.509 } 00:30:29.509 ] 00:30:29.509 }' 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:29.509 13:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.768 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.768 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:29.768 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.768 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.027 [2024-10-09 13:59:36.353856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:30.027 "name": "Existed_Raid", 00:30:30.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.027 "strip_size_kb": 64, 00:30:30.027 "state": "configuring", 00:30:30.027 "raid_level": "raid0", 00:30:30.027 
"superblock": false, 00:30:30.027 "num_base_bdevs": 3, 00:30:30.027 "num_base_bdevs_discovered": 2, 00:30:30.027 "num_base_bdevs_operational": 3, 00:30:30.027 "base_bdevs_list": [ 00:30:30.027 { 00:30:30.027 "name": "BaseBdev1", 00:30:30.027 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:30.027 "is_configured": true, 00:30:30.027 "data_offset": 0, 00:30:30.027 "data_size": 65536 00:30:30.027 }, 00:30:30.027 { 00:30:30.027 "name": null, 00:30:30.027 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:30.027 "is_configured": false, 00:30:30.027 "data_offset": 0, 00:30:30.027 "data_size": 65536 00:30:30.027 }, 00:30:30.027 { 00:30:30.027 "name": "BaseBdev3", 00:30:30.027 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:30.027 "is_configured": true, 00:30:30.027 "data_offset": 0, 00:30:30.027 "data_size": 65536 00:30:30.027 } 00:30:30.027 ] 00:30:30.027 }' 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:30.027 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:30.287 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.287 [2024-10-09 13:59:36.825961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:30.546 "name": "Existed_Raid", 00:30:30.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.546 "strip_size_kb": 64, 00:30:30.546 "state": "configuring", 00:30:30.546 "raid_level": "raid0", 00:30:30.546 "superblock": false, 00:30:30.546 "num_base_bdevs": 3, 00:30:30.546 "num_base_bdevs_discovered": 1, 00:30:30.546 "num_base_bdevs_operational": 3, 00:30:30.546 "base_bdevs_list": [ 00:30:30.546 { 00:30:30.546 "name": null, 00:30:30.546 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:30.546 "is_configured": false, 00:30:30.546 "data_offset": 0, 00:30:30.546 "data_size": 65536 00:30:30.546 }, 00:30:30.546 { 00:30:30.546 "name": null, 00:30:30.546 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:30.546 "is_configured": false, 00:30:30.546 "data_offset": 0, 00:30:30.546 "data_size": 65536 00:30:30.546 }, 00:30:30.546 { 00:30:30.546 "name": "BaseBdev3", 00:30:30.546 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:30.546 "is_configured": true, 00:30:30.546 "data_offset": 0, 00:30:30.546 "data_size": 65536 00:30:30.546 } 00:30:30.546 ] 00:30:30.546 }' 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:30.546 13:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.805 [2024-10-09 13:59:37.308854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.805 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.064 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.064 "name": "Existed_Raid", 00:30:31.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:31.064 "strip_size_kb": 64, 00:30:31.064 "state": "configuring", 00:30:31.064 "raid_level": "raid0", 00:30:31.064 "superblock": false, 00:30:31.064 "num_base_bdevs": 3, 00:30:31.064 "num_base_bdevs_discovered": 2, 00:30:31.064 "num_base_bdevs_operational": 3, 00:30:31.064 "base_bdevs_list": [ 00:30:31.064 { 00:30:31.064 "name": null, 00:30:31.064 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:31.064 "is_configured": false, 00:30:31.064 "data_offset": 0, 00:30:31.064 "data_size": 65536 00:30:31.064 }, 00:30:31.064 { 00:30:31.064 "name": "BaseBdev2", 00:30:31.064 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:31.064 "is_configured": true, 00:30:31.064 "data_offset": 0, 00:30:31.064 "data_size": 65536 00:30:31.064 }, 00:30:31.064 { 00:30:31.064 "name": "BaseBdev3", 00:30:31.064 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:31.064 "is_configured": true, 00:30:31.064 "data_offset": 0, 00:30:31.064 "data_size": 65536 00:30:31.064 } 00:30:31.064 ] 00:30:31.064 }' 00:30:31.064 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.064 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.322 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.322 13:59:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.322 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.322 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:31.322 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.322 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:31.323 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.323 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.323 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:31.323 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.323 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0551abcd-0da7-437c-9bf2-6137f8f8aec0 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.581 [2024-10-09 13:59:37.884225] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:31.581 [2024-10-09 13:59:37.884266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:30:31.581 [2024-10-09 13:59:37.884278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:31.581 [2024-10-09 13:59:37.884569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 
00:30:31.581 [2024-10-09 13:59:37.884688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:30:31.581 [2024-10-09 13:59:37.884699] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:30:31.581 [2024-10-09 13:59:37.884903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:31.581 NewBaseBdev 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.581 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:30:31.581 [ 00:30:31.581 { 00:30:31.581 "name": "NewBaseBdev", 00:30:31.581 "aliases": [ 00:30:31.581 "0551abcd-0da7-437c-9bf2-6137f8f8aec0" 00:30:31.581 ], 00:30:31.581 "product_name": "Malloc disk", 00:30:31.581 "block_size": 512, 00:30:31.581 "num_blocks": 65536, 00:30:31.581 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:31.582 "assigned_rate_limits": { 00:30:31.582 "rw_ios_per_sec": 0, 00:30:31.582 "rw_mbytes_per_sec": 0, 00:30:31.582 "r_mbytes_per_sec": 0, 00:30:31.582 "w_mbytes_per_sec": 0 00:30:31.582 }, 00:30:31.582 "claimed": true, 00:30:31.582 "claim_type": "exclusive_write", 00:30:31.582 "zoned": false, 00:30:31.582 "supported_io_types": { 00:30:31.582 "read": true, 00:30:31.582 "write": true, 00:30:31.582 "unmap": true, 00:30:31.582 "flush": true, 00:30:31.582 "reset": true, 00:30:31.582 "nvme_admin": false, 00:30:31.582 "nvme_io": false, 00:30:31.582 "nvme_io_md": false, 00:30:31.582 "write_zeroes": true, 00:30:31.582 "zcopy": true, 00:30:31.582 "get_zone_info": false, 00:30:31.582 "zone_management": false, 00:30:31.582 "zone_append": false, 00:30:31.582 "compare": false, 00:30:31.582 "compare_and_write": false, 00:30:31.582 "abort": true, 00:30:31.582 "seek_hole": false, 00:30:31.582 "seek_data": false, 00:30:31.582 "copy": true, 00:30:31.582 "nvme_iov_md": false 00:30:31.582 }, 00:30:31.582 "memory_domains": [ 00:30:31.582 { 00:30:31.582 "dma_device_id": "system", 00:30:31.582 "dma_device_type": 1 00:30:31.582 }, 00:30:31.582 { 00:30:31.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:31.582 "dma_device_type": 2 00:30:31.582 } 00:30:31.582 ], 00:30:31.582 "driver_specific": {} 00:30:31.582 } 00:30:31.582 ] 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:31.582 "name": "Existed_Raid", 00:30:31.582 "uuid": "5049a880-4d92-4cf3-9409-bb61b23ad953", 00:30:31.582 "strip_size_kb": 64, 00:30:31.582 "state": "online", 00:30:31.582 "raid_level": "raid0", 00:30:31.582 "superblock": false, 00:30:31.582 "num_base_bdevs": 3, 00:30:31.582 
"num_base_bdevs_discovered": 3, 00:30:31.582 "num_base_bdevs_operational": 3, 00:30:31.582 "base_bdevs_list": [ 00:30:31.582 { 00:30:31.582 "name": "NewBaseBdev", 00:30:31.582 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:31.582 "is_configured": true, 00:30:31.582 "data_offset": 0, 00:30:31.582 "data_size": 65536 00:30:31.582 }, 00:30:31.582 { 00:30:31.582 "name": "BaseBdev2", 00:30:31.582 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:31.582 "is_configured": true, 00:30:31.582 "data_offset": 0, 00:30:31.582 "data_size": 65536 00:30:31.582 }, 00:30:31.582 { 00:30:31.582 "name": "BaseBdev3", 00:30:31.582 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:31.582 "is_configured": true, 00:30:31.582 "data_offset": 0, 00:30:31.582 "data_size": 65536 00:30:31.582 } 00:30:31.582 ] 00:30:31.582 }' 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:31.582 13:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:31.841 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.841 [2024-10-09 13:59:38.372763] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.098 "name": "Existed_Raid", 00:30:32.098 "aliases": [ 00:30:32.098 "5049a880-4d92-4cf3-9409-bb61b23ad953" 00:30:32.098 ], 00:30:32.098 "product_name": "Raid Volume", 00:30:32.098 "block_size": 512, 00:30:32.098 "num_blocks": 196608, 00:30:32.098 "uuid": "5049a880-4d92-4cf3-9409-bb61b23ad953", 00:30:32.098 "assigned_rate_limits": { 00:30:32.098 "rw_ios_per_sec": 0, 00:30:32.098 "rw_mbytes_per_sec": 0, 00:30:32.098 "r_mbytes_per_sec": 0, 00:30:32.098 "w_mbytes_per_sec": 0 00:30:32.098 }, 00:30:32.098 "claimed": false, 00:30:32.098 "zoned": false, 00:30:32.098 "supported_io_types": { 00:30:32.098 "read": true, 00:30:32.098 "write": true, 00:30:32.098 "unmap": true, 00:30:32.098 "flush": true, 00:30:32.098 "reset": true, 00:30:32.098 "nvme_admin": false, 00:30:32.098 "nvme_io": false, 00:30:32.098 "nvme_io_md": false, 00:30:32.098 "write_zeroes": true, 00:30:32.098 "zcopy": false, 00:30:32.098 "get_zone_info": false, 00:30:32.098 "zone_management": false, 00:30:32.098 "zone_append": false, 00:30:32.098 "compare": false, 00:30:32.098 "compare_and_write": false, 00:30:32.098 "abort": false, 00:30:32.098 "seek_hole": false, 00:30:32.098 "seek_data": false, 00:30:32.098 "copy": false, 00:30:32.098 "nvme_iov_md": false 00:30:32.098 }, 00:30:32.098 "memory_domains": [ 00:30:32.098 { 00:30:32.098 "dma_device_id": "system", 00:30:32.098 "dma_device_type": 1 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:32.098 "dma_device_type": 2 00:30:32.098 }, 
00:30:32.098 { 00:30:32.098 "dma_device_id": "system", 00:30:32.098 "dma_device_type": 1 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:32.098 "dma_device_type": 2 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "dma_device_id": "system", 00:30:32.098 "dma_device_type": 1 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:32.098 "dma_device_type": 2 00:30:32.098 } 00:30:32.098 ], 00:30:32.098 "driver_specific": { 00:30:32.098 "raid": { 00:30:32.098 "uuid": "5049a880-4d92-4cf3-9409-bb61b23ad953", 00:30:32.098 "strip_size_kb": 64, 00:30:32.098 "state": "online", 00:30:32.098 "raid_level": "raid0", 00:30:32.098 "superblock": false, 00:30:32.098 "num_base_bdevs": 3, 00:30:32.098 "num_base_bdevs_discovered": 3, 00:30:32.098 "num_base_bdevs_operational": 3, 00:30:32.098 "base_bdevs_list": [ 00:30:32.098 { 00:30:32.098 "name": "NewBaseBdev", 00:30:32.098 "uuid": "0551abcd-0da7-437c-9bf2-6137f8f8aec0", 00:30:32.098 "is_configured": true, 00:30:32.098 "data_offset": 0, 00:30:32.098 "data_size": 65536 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "name": "BaseBdev2", 00:30:32.098 "uuid": "90265473-51e5-4499-9bb9-919aa557a2e7", 00:30:32.098 "is_configured": true, 00:30:32.098 "data_offset": 0, 00:30:32.098 "data_size": 65536 00:30:32.098 }, 00:30:32.098 { 00:30:32.098 "name": "BaseBdev3", 00:30:32.098 "uuid": "1fc15a0d-bcb6-4055-8541-3a5c7b8ab3b8", 00:30:32.098 "is_configured": true, 00:30:32.098 "data_offset": 0, 00:30:32.098 "data_size": 65536 00:30:32.098 } 00:30:32.098 ] 00:30:32.098 } 00:30:32.098 } 00:30:32.098 }' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:32.098 BaseBdev2 00:30:32.098 BaseBdev3' 00:30:32.098 13:59:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:32.098 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:32.099 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.356 [2024-10-09 13:59:38.648492] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:32.356 [2024-10-09 13:59:38.648523] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:32.356 [2024-10-09 13:59:38.648629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:32.356 [2024-10-09 13:59:38.648716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:32.356 [2024-10-09 13:59:38.648732] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75343 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75343 ']' 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75343 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75343 00:30:32.356 killing process with pid 75343 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75343' 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75343 00:30:32.356 [2024-10-09 13:59:38.698817] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:32.356 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75343 00:30:32.356 [2024-10-09 13:59:38.731429] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:32.614 13:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:32.614 00:30:32.614 real 0m9.135s 00:30:32.614 user 0m15.728s 00:30:32.614 sys 0m1.881s 00:30:32.614 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:30:32.614 ************************************ 00:30:32.614 END TEST raid_state_function_test 00:30:32.614 ************************************ 00:30:32.614 13:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.614 13:59:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:30:32.614 13:59:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:32.614 13:59:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:32.614 13:59:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:32.614 ************************************ 00:30:32.614 START TEST raid_state_function_test_sb 00:30:32.614 ************************************ 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:32.614 Process raid pid: 75953 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75953 
00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75953' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75953 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75953 ']' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:32.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:32.614 13:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:32.872 [2024-10-09 13:59:39.181104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:32.872 [2024-10-09 13:59:39.181312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.872 [2024-10-09 13:59:39.371132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.130 [2024-10-09 13:59:39.430705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.130 [2024-10-09 13:59:39.482641] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.130 [2024-10-09 13:59:39.482691] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.697 [2024-10-09 13:59:40.200510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:33.697 [2024-10-09 13:59:40.200579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:33.697 [2024-10-09 13:59:40.200599] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:33.697 [2024-10-09 13:59:40.200615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:33.697 [2024-10-09 13:59:40.200624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:30:33.697 [2024-10-09 13:59:40.200642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:33.697 13:59:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:33.955 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.955 "name": "Existed_Raid", 00:30:33.955 "uuid": "e27bc756-1910-49b4-88ba-ca4e3678fa1a", 00:30:33.955 "strip_size_kb": 64, 00:30:33.955 "state": "configuring", 00:30:33.955 "raid_level": "raid0", 00:30:33.955 "superblock": true, 00:30:33.955 "num_base_bdevs": 3, 00:30:33.955 "num_base_bdevs_discovered": 0, 00:30:33.955 "num_base_bdevs_operational": 3, 00:30:33.955 "base_bdevs_list": [ 00:30:33.955 { 00:30:33.955 "name": "BaseBdev1", 00:30:33.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.955 "is_configured": false, 00:30:33.955 "data_offset": 0, 00:30:33.955 "data_size": 0 00:30:33.955 }, 00:30:33.955 { 00:30:33.955 "name": "BaseBdev2", 00:30:33.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.955 "is_configured": false, 00:30:33.955 "data_offset": 0, 00:30:33.955 "data_size": 0 00:30:33.955 }, 00:30:33.955 { 00:30:33.955 "name": "BaseBdev3", 00:30:33.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.955 "is_configured": false, 00:30:33.955 "data_offset": 0, 00:30:33.955 "data_size": 0 00:30:33.955 } 00:30:33.955 ] 00:30:33.955 }' 00:30:33.955 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.955 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.213 [2024-10-09 13:59:40.680507] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.213 [2024-10-09 13:59:40.680556] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.213 [2024-10-09 13:59:40.688589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:34.213 [2024-10-09 13:59:40.688654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:34.213 [2024-10-09 13:59:40.688667] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:34.213 [2024-10-09 13:59:40.688683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:34.213 [2024-10-09 13:59:40.688693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:34.213 [2024-10-09 13:59:40.688734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.213 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.214 [2024-10-09 13:59:40.706692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:34.214 BaseBdev1 
00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.214 [ 00:30:34.214 { 00:30:34.214 "name": "BaseBdev1", 00:30:34.214 "aliases": [ 00:30:34.214 "236d5789-912c-45f3-87f9-13f184bd0cc2" 00:30:34.214 ], 00:30:34.214 "product_name": "Malloc disk", 00:30:34.214 "block_size": 512, 00:30:34.214 "num_blocks": 65536, 00:30:34.214 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:34.214 "assigned_rate_limits": { 00:30:34.214 
"rw_ios_per_sec": 0, 00:30:34.214 "rw_mbytes_per_sec": 0, 00:30:34.214 "r_mbytes_per_sec": 0, 00:30:34.214 "w_mbytes_per_sec": 0 00:30:34.214 }, 00:30:34.214 "claimed": true, 00:30:34.214 "claim_type": "exclusive_write", 00:30:34.214 "zoned": false, 00:30:34.214 "supported_io_types": { 00:30:34.214 "read": true, 00:30:34.214 "write": true, 00:30:34.214 "unmap": true, 00:30:34.214 "flush": true, 00:30:34.214 "reset": true, 00:30:34.214 "nvme_admin": false, 00:30:34.214 "nvme_io": false, 00:30:34.214 "nvme_io_md": false, 00:30:34.214 "write_zeroes": true, 00:30:34.214 "zcopy": true, 00:30:34.214 "get_zone_info": false, 00:30:34.214 "zone_management": false, 00:30:34.214 "zone_append": false, 00:30:34.214 "compare": false, 00:30:34.214 "compare_and_write": false, 00:30:34.214 "abort": true, 00:30:34.214 "seek_hole": false, 00:30:34.214 "seek_data": false, 00:30:34.214 "copy": true, 00:30:34.214 "nvme_iov_md": false 00:30:34.214 }, 00:30:34.214 "memory_domains": [ 00:30:34.214 { 00:30:34.214 "dma_device_id": "system", 00:30:34.214 "dma_device_type": 1 00:30:34.214 }, 00:30:34.214 { 00:30:34.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.214 "dma_device_type": 2 00:30:34.214 } 00:30:34.214 ], 00:30:34.214 "driver_specific": {} 00:30:34.214 } 00:30:34.214 ] 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.214 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.472 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.472 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.472 "name": "Existed_Raid", 00:30:34.472 "uuid": "b62d7e5c-2104-4379-9301-cd410b36ca48", 00:30:34.472 "strip_size_kb": 64, 00:30:34.472 "state": "configuring", 00:30:34.472 "raid_level": "raid0", 00:30:34.472 "superblock": true, 00:30:34.472 "num_base_bdevs": 3, 00:30:34.472 "num_base_bdevs_discovered": 1, 00:30:34.472 "num_base_bdevs_operational": 3, 00:30:34.472 "base_bdevs_list": [ 00:30:34.472 { 00:30:34.472 "name": "BaseBdev1", 00:30:34.472 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:34.472 "is_configured": true, 00:30:34.472 "data_offset": 2048, 00:30:34.472 "data_size": 63488 
00:30:34.472 }, 00:30:34.472 { 00:30:34.472 "name": "BaseBdev2", 00:30:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.472 "is_configured": false, 00:30:34.472 "data_offset": 0, 00:30:34.472 "data_size": 0 00:30:34.472 }, 00:30:34.472 { 00:30:34.472 "name": "BaseBdev3", 00:30:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.472 "is_configured": false, 00:30:34.472 "data_offset": 0, 00:30:34.472 "data_size": 0 00:30:34.472 } 00:30:34.472 ] 00:30:34.472 }' 00:30:34.472 13:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.472 13:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.730 [2024-10-09 13:59:41.230921] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.730 [2024-10-09 13:59:41.230980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.730 [2024-10-09 13:59:41.238926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:34.730 [2024-10-09 
13:59:41.241151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:34.730 [2024-10-09 13:59:41.241198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:34.730 [2024-10-09 13:59:41.241209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:34.730 [2024-10-09 13:59:41.241223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:34.730 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.731 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:34.989 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.989 "name": "Existed_Raid", 00:30:34.989 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:34.989 "strip_size_kb": 64, 00:30:34.989 "state": "configuring", 00:30:34.989 "raid_level": "raid0", 00:30:34.989 "superblock": true, 00:30:34.989 "num_base_bdevs": 3, 00:30:34.989 "num_base_bdevs_discovered": 1, 00:30:34.989 "num_base_bdevs_operational": 3, 00:30:34.989 "base_bdevs_list": [ 00:30:34.989 { 00:30:34.989 "name": "BaseBdev1", 00:30:34.989 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:34.989 "is_configured": true, 00:30:34.989 "data_offset": 2048, 00:30:34.989 "data_size": 63488 00:30:34.989 }, 00:30:34.989 { 00:30:34.989 "name": "BaseBdev2", 00:30:34.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.989 "is_configured": false, 00:30:34.989 "data_offset": 0, 00:30:34.990 "data_size": 0 00:30:34.990 }, 00:30:34.990 { 00:30:34.990 "name": "BaseBdev3", 00:30:34.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.990 "is_configured": false, 00:30:34.990 "data_offset": 0, 00:30:34.990 "data_size": 0 00:30:34.990 } 00:30:34.990 ] 00:30:34.990 }' 00:30:34.990 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.990 13:59:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:35.248 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:35.248 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.249 [2024-10-09 13:59:41.751471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:35.249 BaseBdev2 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.249 [ 00:30:35.249 { 00:30:35.249 "name": "BaseBdev2", 00:30:35.249 "aliases": [ 00:30:35.249 "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8" 00:30:35.249 ], 00:30:35.249 "product_name": "Malloc disk", 00:30:35.249 "block_size": 512, 00:30:35.249 "num_blocks": 65536, 00:30:35.249 "uuid": "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8", 00:30:35.249 "assigned_rate_limits": { 00:30:35.249 "rw_ios_per_sec": 0, 00:30:35.249 "rw_mbytes_per_sec": 0, 00:30:35.249 "r_mbytes_per_sec": 0, 00:30:35.249 "w_mbytes_per_sec": 0 00:30:35.249 }, 00:30:35.249 "claimed": true, 00:30:35.249 "claim_type": "exclusive_write", 00:30:35.249 "zoned": false, 00:30:35.249 "supported_io_types": { 00:30:35.249 "read": true, 00:30:35.249 "write": true, 00:30:35.249 "unmap": true, 00:30:35.249 "flush": true, 00:30:35.249 "reset": true, 00:30:35.249 "nvme_admin": false, 00:30:35.249 "nvme_io": false, 00:30:35.249 "nvme_io_md": false, 00:30:35.249 "write_zeroes": true, 00:30:35.249 "zcopy": true, 00:30:35.249 "get_zone_info": false, 00:30:35.249 "zone_management": false, 00:30:35.249 "zone_append": false, 00:30:35.249 "compare": false, 00:30:35.249 "compare_and_write": false, 00:30:35.249 "abort": true, 00:30:35.249 "seek_hole": false, 00:30:35.249 "seek_data": false, 00:30:35.249 "copy": true, 00:30:35.249 "nvme_iov_md": false 00:30:35.249 }, 00:30:35.249 "memory_domains": [ 00:30:35.249 { 00:30:35.249 "dma_device_id": "system", 00:30:35.249 "dma_device_type": 1 00:30:35.249 }, 00:30:35.249 { 00:30:35.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.249 "dma_device_type": 2 00:30:35.249 } 00:30:35.249 ], 00:30:35.249 "driver_specific": {} 00:30:35.249 } 00:30:35.249 ] 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.249 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.508 13:59:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.508 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:35.508 "name": "Existed_Raid", 00:30:35.508 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:35.508 "strip_size_kb": 64, 00:30:35.508 "state": "configuring", 00:30:35.508 "raid_level": "raid0", 00:30:35.508 "superblock": true, 00:30:35.508 "num_base_bdevs": 3, 00:30:35.508 "num_base_bdevs_discovered": 2, 00:30:35.508 "num_base_bdevs_operational": 3, 00:30:35.508 "base_bdevs_list": [ 00:30:35.508 { 00:30:35.508 "name": "BaseBdev1", 00:30:35.508 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:35.508 "is_configured": true, 00:30:35.508 "data_offset": 2048, 00:30:35.508 "data_size": 63488 00:30:35.508 }, 00:30:35.508 { 00:30:35.508 "name": "BaseBdev2", 00:30:35.508 "uuid": "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8", 00:30:35.508 "is_configured": true, 00:30:35.508 "data_offset": 2048, 00:30:35.508 "data_size": 63488 00:30:35.508 }, 00:30:35.508 { 00:30:35.508 "name": "BaseBdev3", 00:30:35.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.508 "is_configured": false, 00:30:35.508 "data_offset": 0, 00:30:35.508 "data_size": 0 00:30:35.508 } 00:30:35.508 ] 00:30:35.508 }' 00:30:35.508 13:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:35.508 13:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.767 [2024-10-09 13:59:42.246697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:35.767 [2024-10-09 13:59:42.247070] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:35.767 [2024-10-09 13:59:42.247102] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:35.767 [2024-10-09 13:59:42.247431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:35.767 BaseBdev3 00:30:35.767 [2024-10-09 13:59:42.247572] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:35.767 [2024-10-09 13:59:42.247584] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:30:35.767 [2024-10-09 13:59:42.247714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.767 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.767 [ 00:30:35.767 { 00:30:35.767 "name": "BaseBdev3", 00:30:35.767 "aliases": [ 00:30:35.767 "a6caec0a-e6bd-4fd2-bc88-049dbf793bb9" 00:30:35.767 ], 00:30:35.767 "product_name": "Malloc disk", 00:30:35.767 "block_size": 512, 00:30:35.767 "num_blocks": 65536, 00:30:35.767 "uuid": "a6caec0a-e6bd-4fd2-bc88-049dbf793bb9", 00:30:35.767 "assigned_rate_limits": { 00:30:35.767 "rw_ios_per_sec": 0, 00:30:35.767 "rw_mbytes_per_sec": 0, 00:30:35.767 "r_mbytes_per_sec": 0, 00:30:35.767 "w_mbytes_per_sec": 0 00:30:35.767 }, 00:30:35.767 "claimed": true, 00:30:35.767 "claim_type": "exclusive_write", 00:30:35.767 "zoned": false, 00:30:35.767 "supported_io_types": { 00:30:35.767 "read": true, 00:30:35.767 "write": true, 00:30:35.767 "unmap": true, 00:30:35.767 "flush": true, 00:30:35.767 "reset": true, 00:30:35.768 "nvme_admin": false, 00:30:35.768 "nvme_io": false, 00:30:35.768 "nvme_io_md": false, 00:30:35.768 "write_zeroes": true, 00:30:35.768 "zcopy": true, 00:30:35.768 "get_zone_info": false, 00:30:35.768 "zone_management": false, 00:30:35.768 "zone_append": false, 00:30:35.768 "compare": false, 00:30:35.768 "compare_and_write": false, 00:30:35.768 "abort": true, 00:30:35.768 "seek_hole": false, 00:30:35.768 "seek_data": false, 00:30:35.768 "copy": true, 00:30:35.768 "nvme_iov_md": false 00:30:35.768 }, 00:30:35.768 "memory_domains": [ 00:30:35.768 { 00:30:35.768 "dma_device_id": "system", 00:30:35.768 "dma_device_type": 1 00:30:35.768 }, 00:30:35.768 { 00:30:35.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.768 "dma_device_type": 2 00:30:35.768 } 00:30:35.768 ], 00:30:35.768 "driver_specific": 
{} 00:30:35.768 } 00:30:35.768 ] 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:35.768 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.027 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.027 "name": "Existed_Raid", 00:30:36.027 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:36.027 "strip_size_kb": 64, 00:30:36.027 "state": "online", 00:30:36.027 "raid_level": "raid0", 00:30:36.027 "superblock": true, 00:30:36.027 "num_base_bdevs": 3, 00:30:36.027 "num_base_bdevs_discovered": 3, 00:30:36.027 "num_base_bdevs_operational": 3, 00:30:36.027 "base_bdevs_list": [ 00:30:36.027 { 00:30:36.027 "name": "BaseBdev1", 00:30:36.027 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:36.027 "is_configured": true, 00:30:36.027 "data_offset": 2048, 00:30:36.027 "data_size": 63488 00:30:36.027 }, 00:30:36.027 { 00:30:36.027 "name": "BaseBdev2", 00:30:36.027 "uuid": "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8", 00:30:36.027 "is_configured": true, 00:30:36.027 "data_offset": 2048, 00:30:36.027 "data_size": 63488 00:30:36.027 }, 00:30:36.027 { 00:30:36.027 "name": "BaseBdev3", 00:30:36.027 "uuid": "a6caec0a-e6bd-4fd2-bc88-049dbf793bb9", 00:30:36.027 "is_configured": true, 00:30:36.027 "data_offset": 2048, 00:30:36.027 "data_size": 63488 00:30:36.027 } 00:30:36.027 ] 00:30:36.027 }' 00:30:36.027 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.027 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.285 [2024-10-09 13:59:42.743144] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.285 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.285 "name": "Existed_Raid", 00:30:36.285 "aliases": [ 00:30:36.285 "1c18d284-3be1-4f49-b1d4-dcacde23fa61" 00:30:36.285 ], 00:30:36.285 "product_name": "Raid Volume", 00:30:36.285 "block_size": 512, 00:30:36.285 "num_blocks": 190464, 00:30:36.285 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:36.285 "assigned_rate_limits": { 00:30:36.285 "rw_ios_per_sec": 0, 00:30:36.285 "rw_mbytes_per_sec": 0, 00:30:36.285 "r_mbytes_per_sec": 0, 00:30:36.285 "w_mbytes_per_sec": 0 00:30:36.285 }, 00:30:36.285 "claimed": false, 00:30:36.285 "zoned": false, 00:30:36.285 "supported_io_types": { 00:30:36.285 "read": true, 00:30:36.285 "write": true, 00:30:36.285 "unmap": true, 00:30:36.285 "flush": true, 00:30:36.285 "reset": true, 00:30:36.285 "nvme_admin": false, 00:30:36.285 "nvme_io": false, 00:30:36.285 "nvme_io_md": false, 00:30:36.285 
"write_zeroes": true, 00:30:36.285 "zcopy": false, 00:30:36.285 "get_zone_info": false, 00:30:36.285 "zone_management": false, 00:30:36.285 "zone_append": false, 00:30:36.285 "compare": false, 00:30:36.285 "compare_and_write": false, 00:30:36.285 "abort": false, 00:30:36.285 "seek_hole": false, 00:30:36.285 "seek_data": false, 00:30:36.285 "copy": false, 00:30:36.285 "nvme_iov_md": false 00:30:36.285 }, 00:30:36.285 "memory_domains": [ 00:30:36.285 { 00:30:36.285 "dma_device_id": "system", 00:30:36.285 "dma_device_type": 1 00:30:36.285 }, 00:30:36.285 { 00:30:36.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.285 "dma_device_type": 2 00:30:36.285 }, 00:30:36.285 { 00:30:36.285 "dma_device_id": "system", 00:30:36.285 "dma_device_type": 1 00:30:36.285 }, 00:30:36.285 { 00:30:36.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.285 "dma_device_type": 2 00:30:36.285 }, 00:30:36.285 { 00:30:36.285 "dma_device_id": "system", 00:30:36.285 "dma_device_type": 1 00:30:36.285 }, 00:30:36.285 { 00:30:36.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.285 "dma_device_type": 2 00:30:36.285 } 00:30:36.285 ], 00:30:36.285 "driver_specific": { 00:30:36.285 "raid": { 00:30:36.285 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:36.285 "strip_size_kb": 64, 00:30:36.285 "state": "online", 00:30:36.285 "raid_level": "raid0", 00:30:36.285 "superblock": true, 00:30:36.285 "num_base_bdevs": 3, 00:30:36.285 "num_base_bdevs_discovered": 3, 00:30:36.285 "num_base_bdevs_operational": 3, 00:30:36.285 "base_bdevs_list": [ 00:30:36.285 { 00:30:36.285 "name": "BaseBdev1", 00:30:36.285 "uuid": "236d5789-912c-45f3-87f9-13f184bd0cc2", 00:30:36.285 "is_configured": true, 00:30:36.285 "data_offset": 2048, 00:30:36.285 "data_size": 63488 00:30:36.286 }, 00:30:36.286 { 00:30:36.286 "name": "BaseBdev2", 00:30:36.286 "uuid": "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8", 00:30:36.286 "is_configured": true, 00:30:36.286 "data_offset": 2048, 00:30:36.286 "data_size": 63488 00:30:36.286 }, 
00:30:36.286 { 00:30:36.286 "name": "BaseBdev3", 00:30:36.286 "uuid": "a6caec0a-e6bd-4fd2-bc88-049dbf793bb9", 00:30:36.286 "is_configured": true, 00:30:36.286 "data_offset": 2048, 00:30:36.286 "data_size": 63488 00:30:36.286 } 00:30:36.286 ] 00:30:36.286 } 00:30:36.286 } 00:30:36.286 }' 00:30:36.286 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:36.286 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:36.286 BaseBdev2 00:30:36.286 BaseBdev3' 00:30:36.286 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.544 
13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.544 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:36.545 13:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.545 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.545 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.545 13:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.545 [2024-10-09 13:59:43.018982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:36.545 [2024-10-09 13:59:43.019013] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:36.545 [2024-10-09 13:59:43.019081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.545 "name": "Existed_Raid", 00:30:36.545 "uuid": "1c18d284-3be1-4f49-b1d4-dcacde23fa61", 00:30:36.545 "strip_size_kb": 64, 00:30:36.545 "state": "offline", 00:30:36.545 "raid_level": "raid0", 00:30:36.545 "superblock": true, 00:30:36.545 "num_base_bdevs": 3, 00:30:36.545 "num_base_bdevs_discovered": 2, 00:30:36.545 "num_base_bdevs_operational": 2, 00:30:36.545 "base_bdevs_list": [ 00:30:36.545 { 00:30:36.545 "name": null, 00:30:36.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.545 "is_configured": false, 00:30:36.545 "data_offset": 0, 00:30:36.545 "data_size": 63488 00:30:36.545 }, 00:30:36.545 { 00:30:36.545 "name": "BaseBdev2", 00:30:36.545 "uuid": "6d3ae457-b1ef-4a8b-ace6-eb364689e3c8", 00:30:36.545 "is_configured": true, 00:30:36.545 "data_offset": 2048, 00:30:36.545 "data_size": 63488 00:30:36.545 }, 00:30:36.545 { 00:30:36.545 "name": "BaseBdev3", 00:30:36.545 "uuid": "a6caec0a-e6bd-4fd2-bc88-049dbf793bb9", 
00:30:36.545 "is_configured": true, 00:30:36.545 "data_offset": 2048, 00:30:36.545 "data_size": 63488 00:30:36.545 } 00:30:36.545 ] 00:30:36.545 }' 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.545 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 [2024-10-09 13:59:43.519312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 [2024-10-09 13:59:43.583268] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:37.113 [2024-10-09 13:59:43.583319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.113 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.376 BaseBdev2 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.376 [ 00:30:37.376 { 00:30:37.376 "name": "BaseBdev2", 00:30:37.376 "aliases": [ 00:30:37.376 "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d" 00:30:37.376 ], 00:30:37.376 "product_name": "Malloc disk", 00:30:37.376 "block_size": 512, 00:30:37.376 "num_blocks": 65536, 00:30:37.376 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:37.376 "assigned_rate_limits": { 00:30:37.376 "rw_ios_per_sec": 0, 00:30:37.376 "rw_mbytes_per_sec": 0, 00:30:37.376 "r_mbytes_per_sec": 0, 00:30:37.376 "w_mbytes_per_sec": 0 00:30:37.376 }, 00:30:37.376 "claimed": false, 00:30:37.376 "zoned": false, 00:30:37.376 "supported_io_types": { 00:30:37.376 "read": true, 00:30:37.376 "write": true, 00:30:37.376 "unmap": true, 00:30:37.376 "flush": true, 00:30:37.376 "reset": true, 00:30:37.376 "nvme_admin": false, 00:30:37.376 "nvme_io": false, 00:30:37.376 "nvme_io_md": false, 00:30:37.376 "write_zeroes": true, 00:30:37.376 "zcopy": true, 00:30:37.376 "get_zone_info": false, 00:30:37.376 "zone_management": false, 00:30:37.376 
"zone_append": false, 00:30:37.376 "compare": false, 00:30:37.376 "compare_and_write": false, 00:30:37.376 "abort": true, 00:30:37.376 "seek_hole": false, 00:30:37.376 "seek_data": false, 00:30:37.376 "copy": true, 00:30:37.376 "nvme_iov_md": false 00:30:37.376 }, 00:30:37.376 "memory_domains": [ 00:30:37.376 { 00:30:37.376 "dma_device_id": "system", 00:30:37.376 "dma_device_type": 1 00:30:37.376 }, 00:30:37.376 { 00:30:37.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:37.376 "dma_device_type": 2 00:30:37.376 } 00:30:37.376 ], 00:30:37.376 "driver_specific": {} 00:30:37.376 } 00:30:37.376 ] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.376 BaseBdev3 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:37.376 
13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:37.376 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.377 [ 00:30:37.377 { 00:30:37.377 "name": "BaseBdev3", 00:30:37.377 "aliases": [ 00:30:37.377 "181d0f06-02f9-4320-814e-4ffc3138b1bb" 00:30:37.377 ], 00:30:37.377 "product_name": "Malloc disk", 00:30:37.377 "block_size": 512, 00:30:37.377 "num_blocks": 65536, 00:30:37.377 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:37.377 "assigned_rate_limits": { 00:30:37.377 "rw_ios_per_sec": 0, 00:30:37.377 "rw_mbytes_per_sec": 0, 00:30:37.377 "r_mbytes_per_sec": 0, 00:30:37.377 "w_mbytes_per_sec": 0 00:30:37.377 }, 00:30:37.377 "claimed": false, 00:30:37.377 "zoned": false, 00:30:37.377 "supported_io_types": { 00:30:37.377 "read": true, 00:30:37.377 "write": true, 00:30:37.377 "unmap": true, 00:30:37.377 "flush": true, 00:30:37.377 "reset": true, 00:30:37.377 "nvme_admin": false, 00:30:37.377 "nvme_io": false, 00:30:37.377 "nvme_io_md": false, 00:30:37.377 "write_zeroes": true, 00:30:37.377 "zcopy": true, 00:30:37.377 "get_zone_info": false, 
00:30:37.377 "zone_management": false, 00:30:37.377 "zone_append": false, 00:30:37.377 "compare": false, 00:30:37.377 "compare_and_write": false, 00:30:37.377 "abort": true, 00:30:37.377 "seek_hole": false, 00:30:37.377 "seek_data": false, 00:30:37.377 "copy": true, 00:30:37.377 "nvme_iov_md": false 00:30:37.377 }, 00:30:37.377 "memory_domains": [ 00:30:37.377 { 00:30:37.377 "dma_device_id": "system", 00:30:37.377 "dma_device_type": 1 00:30:37.377 }, 00:30:37.377 { 00:30:37.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:37.377 "dma_device_type": 2 00:30:37.377 } 00:30:37.377 ], 00:30:37.377 "driver_specific": {} 00:30:37.377 } 00:30:37.377 ] 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.377 [2024-10-09 13:59:43.756279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:37.377 [2024-10-09 13:59:43.756329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:37.377 [2024-10-09 13:59:43.756353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:37.377 [2024-10-09 13:59:43.758564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:30:37.377 "name": "Existed_Raid", 00:30:37.377 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:37.377 "strip_size_kb": 64, 00:30:37.377 "state": "configuring", 00:30:37.377 "raid_level": "raid0", 00:30:37.377 "superblock": true, 00:30:37.377 "num_base_bdevs": 3, 00:30:37.377 "num_base_bdevs_discovered": 2, 00:30:37.377 "num_base_bdevs_operational": 3, 00:30:37.377 "base_bdevs_list": [ 00:30:37.377 { 00:30:37.377 "name": "BaseBdev1", 00:30:37.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.377 "is_configured": false, 00:30:37.377 "data_offset": 0, 00:30:37.377 "data_size": 0 00:30:37.377 }, 00:30:37.377 { 00:30:37.377 "name": "BaseBdev2", 00:30:37.377 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:37.377 "is_configured": true, 00:30:37.377 "data_offset": 2048, 00:30:37.377 "data_size": 63488 00:30:37.377 }, 00:30:37.377 { 00:30:37.377 "name": "BaseBdev3", 00:30:37.377 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:37.377 "is_configured": true, 00:30:37.377 "data_offset": 2048, 00:30:37.377 "data_size": 63488 00:30:37.377 } 00:30:37.377 ] 00:30:37.377 }' 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.377 13:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.944 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:37.944 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 [2024-10-09 13:59:44.208375] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.945 "name": "Existed_Raid", 00:30:37.945 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:37.945 "strip_size_kb": 64, 00:30:37.945 "state": "configuring", 00:30:37.945 "raid_level": "raid0", 
00:30:37.945 "superblock": true, 00:30:37.945 "num_base_bdevs": 3, 00:30:37.945 "num_base_bdevs_discovered": 1, 00:30:37.945 "num_base_bdevs_operational": 3, 00:30:37.945 "base_bdevs_list": [ 00:30:37.945 { 00:30:37.945 "name": "BaseBdev1", 00:30:37.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.945 "is_configured": false, 00:30:37.945 "data_offset": 0, 00:30:37.945 "data_size": 0 00:30:37.945 }, 00:30:37.945 { 00:30:37.945 "name": null, 00:30:37.945 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:37.945 "is_configured": false, 00:30:37.945 "data_offset": 0, 00:30:37.945 "data_size": 63488 00:30:37.945 }, 00:30:37.945 { 00:30:37.945 "name": "BaseBdev3", 00:30:37.945 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:37.945 "is_configured": true, 00:30:37.945 "data_offset": 2048, 00:30:37.945 "data_size": 63488 00:30:37.945 } 00:30:37.945 ] 00:30:37.945 }' 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.945 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.203 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.204 [2024-10-09 13:59:44.727458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.204 BaseBdev1 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.204 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.463 [ 00:30:38.463 { 00:30:38.463 "name": "BaseBdev1", 00:30:38.463 
"aliases": [ 00:30:38.463 "40c76789-8878-44b1-9bc6-0e9a8d9a804f" 00:30:38.463 ], 00:30:38.463 "product_name": "Malloc disk", 00:30:38.463 "block_size": 512, 00:30:38.463 "num_blocks": 65536, 00:30:38.463 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:38.463 "assigned_rate_limits": { 00:30:38.463 "rw_ios_per_sec": 0, 00:30:38.463 "rw_mbytes_per_sec": 0, 00:30:38.463 "r_mbytes_per_sec": 0, 00:30:38.463 "w_mbytes_per_sec": 0 00:30:38.463 }, 00:30:38.463 "claimed": true, 00:30:38.463 "claim_type": "exclusive_write", 00:30:38.463 "zoned": false, 00:30:38.463 "supported_io_types": { 00:30:38.463 "read": true, 00:30:38.463 "write": true, 00:30:38.463 "unmap": true, 00:30:38.463 "flush": true, 00:30:38.463 "reset": true, 00:30:38.463 "nvme_admin": false, 00:30:38.463 "nvme_io": false, 00:30:38.463 "nvme_io_md": false, 00:30:38.463 "write_zeroes": true, 00:30:38.463 "zcopy": true, 00:30:38.463 "get_zone_info": false, 00:30:38.463 "zone_management": false, 00:30:38.463 "zone_append": false, 00:30:38.463 "compare": false, 00:30:38.463 "compare_and_write": false, 00:30:38.463 "abort": true, 00:30:38.463 "seek_hole": false, 00:30:38.463 "seek_data": false, 00:30:38.463 "copy": true, 00:30:38.463 "nvme_iov_md": false 00:30:38.463 }, 00:30:38.463 "memory_domains": [ 00:30:38.463 { 00:30:38.463 "dma_device_id": "system", 00:30:38.463 "dma_device_type": 1 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.463 "dma_device_type": 2 00:30:38.463 } 00:30:38.463 ], 00:30:38.463 "driver_specific": {} 00:30:38.463 } 00:30:38.463 ] 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:38.463 13:59:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.463 "name": "Existed_Raid", 00:30:38.463 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:38.463 "strip_size_kb": 64, 00:30:38.463 "state": "configuring", 00:30:38.463 "raid_level": "raid0", 00:30:38.463 "superblock": true, 00:30:38.463 "num_base_bdevs": 3, 00:30:38.463 
"num_base_bdevs_discovered": 2, 00:30:38.463 "num_base_bdevs_operational": 3, 00:30:38.463 "base_bdevs_list": [ 00:30:38.463 { 00:30:38.463 "name": "BaseBdev1", 00:30:38.463 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:38.463 "is_configured": true, 00:30:38.463 "data_offset": 2048, 00:30:38.463 "data_size": 63488 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "name": null, 00:30:38.463 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:38.463 "is_configured": false, 00:30:38.463 "data_offset": 0, 00:30:38.463 "data_size": 63488 00:30:38.463 }, 00:30:38.463 { 00:30:38.463 "name": "BaseBdev3", 00:30:38.463 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:38.463 "is_configured": true, 00:30:38.463 "data_offset": 2048, 00:30:38.463 "data_size": 63488 00:30:38.463 } 00:30:38.463 ] 00:30:38.463 }' 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.463 13:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.721 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:38.721 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.721 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.721 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.980 13:59:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.980 [2024-10-09 13:59:45.311724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.980 13:59:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.980 "name": "Existed_Raid", 00:30:38.980 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:38.980 "strip_size_kb": 64, 00:30:38.980 "state": "configuring", 00:30:38.980 "raid_level": "raid0", 00:30:38.980 "superblock": true, 00:30:38.980 "num_base_bdevs": 3, 00:30:38.980 "num_base_bdevs_discovered": 1, 00:30:38.980 "num_base_bdevs_operational": 3, 00:30:38.980 "base_bdevs_list": [ 00:30:38.980 { 00:30:38.980 "name": "BaseBdev1", 00:30:38.980 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:38.980 "is_configured": true, 00:30:38.980 "data_offset": 2048, 00:30:38.980 "data_size": 63488 00:30:38.980 }, 00:30:38.980 { 00:30:38.980 "name": null, 00:30:38.980 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:38.980 "is_configured": false, 00:30:38.980 "data_offset": 0, 00:30:38.980 "data_size": 63488 00:30:38.980 }, 00:30:38.980 { 00:30:38.980 "name": null, 00:30:38.980 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:38.980 "is_configured": false, 00:30:38.980 "data_offset": 0, 00:30:38.980 "data_size": 63488 00:30:38.980 } 00:30:38.980 ] 00:30:38.980 }' 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.980 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.240 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.240 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:39.240 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.240 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.499 13:59:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.499 [2024-10-09 13:59:45.823931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.499 "name": "Existed_Raid", 00:30:39.499 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:39.499 "strip_size_kb": 64, 00:30:39.499 "state": "configuring", 00:30:39.499 "raid_level": "raid0", 00:30:39.499 "superblock": true, 00:30:39.499 "num_base_bdevs": 3, 00:30:39.499 "num_base_bdevs_discovered": 2, 00:30:39.499 "num_base_bdevs_operational": 3, 00:30:39.499 "base_bdevs_list": [ 00:30:39.499 { 00:30:39.499 "name": "BaseBdev1", 00:30:39.499 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:39.499 "is_configured": true, 00:30:39.499 "data_offset": 2048, 00:30:39.499 "data_size": 63488 00:30:39.499 }, 00:30:39.499 { 00:30:39.499 "name": null, 00:30:39.499 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:39.499 "is_configured": false, 00:30:39.499 "data_offset": 0, 00:30:39.499 "data_size": 63488 00:30:39.499 }, 00:30:39.499 { 00:30:39.499 "name": "BaseBdev3", 00:30:39.499 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:39.499 "is_configured": true, 00:30:39.499 "data_offset": 2048, 00:30:39.499 "data_size": 63488 00:30:39.499 } 00:30:39.499 ] 00:30:39.499 }' 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.499 13:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:30:39.757 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.757 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:39.757 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:39.757 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.757 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.016 [2024-10-09 13:59:46.332025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.016 "name": "Existed_Raid", 00:30:40.016 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:40.016 "strip_size_kb": 64, 00:30:40.016 "state": "configuring", 00:30:40.016 "raid_level": "raid0", 00:30:40.016 "superblock": true, 00:30:40.016 "num_base_bdevs": 3, 00:30:40.016 "num_base_bdevs_discovered": 1, 00:30:40.016 "num_base_bdevs_operational": 3, 00:30:40.016 "base_bdevs_list": [ 00:30:40.016 { 00:30:40.016 "name": null, 00:30:40.016 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:40.016 "is_configured": false, 00:30:40.016 "data_offset": 0, 00:30:40.016 "data_size": 63488 00:30:40.016 }, 00:30:40.016 { 00:30:40.016 "name": null, 00:30:40.016 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:40.016 "is_configured": false, 00:30:40.016 "data_offset": 0, 00:30:40.016 "data_size": 63488 00:30:40.016 
}, 00:30:40.016 { 00:30:40.016 "name": "BaseBdev3", 00:30:40.016 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:40.016 "is_configured": true, 00:30:40.016 "data_offset": 2048, 00:30:40.016 "data_size": 63488 00:30:40.016 } 00:30:40.016 ] 00:30:40.016 }' 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.016 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.275 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.275 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:40.275 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.275 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.275 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.534 [2024-10-09 13:59:46.842908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.534 "name": "Existed_Raid", 00:30:40.534 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:40.534 "strip_size_kb": 64, 00:30:40.534 "state": "configuring", 00:30:40.534 "raid_level": "raid0", 00:30:40.534 "superblock": true, 00:30:40.534 "num_base_bdevs": 3, 00:30:40.534 "num_base_bdevs_discovered": 2, 00:30:40.534 
"num_base_bdevs_operational": 3, 00:30:40.534 "base_bdevs_list": [ 00:30:40.534 { 00:30:40.534 "name": null, 00:30:40.534 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:40.534 "is_configured": false, 00:30:40.534 "data_offset": 0, 00:30:40.534 "data_size": 63488 00:30:40.534 }, 00:30:40.534 { 00:30:40.534 "name": "BaseBdev2", 00:30:40.534 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:40.534 "is_configured": true, 00:30:40.534 "data_offset": 2048, 00:30:40.534 "data_size": 63488 00:30:40.534 }, 00:30:40.534 { 00:30:40.534 "name": "BaseBdev3", 00:30:40.534 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:40.534 "is_configured": true, 00:30:40.534 "data_offset": 2048, 00:30:40.534 "data_size": 63488 00:30:40.534 } 00:30:40.534 ] 00:30:40.534 }' 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.534 13:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.793 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.793 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:40.794 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:40.794 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:40.794 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:40.794 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 40c76789-8878-44b1-9bc6-0e9a8d9a804f 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.053 [2024-10-09 13:59:47.390213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:41.053 [2024-10-09 13:59:47.390378] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:30:41.053 [2024-10-09 13:59:47.390397] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:41.053 NewBaseBdev 00:30:41.053 [2024-10-09 13:59:47.390718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:41.053 [2024-10-09 13:59:47.390855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:30:41.053 [2024-10-09 13:59:47.390866] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:30:41.053 [2024-10-09 13:59:47.390977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:30:41.053 13:59:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.053 [ 00:30:41.053 { 00:30:41.053 "name": "NewBaseBdev", 00:30:41.053 "aliases": [ 00:30:41.053 "40c76789-8878-44b1-9bc6-0e9a8d9a804f" 00:30:41.053 ], 00:30:41.053 "product_name": "Malloc disk", 00:30:41.053 "block_size": 512, 00:30:41.053 "num_blocks": 65536, 00:30:41.053 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:41.053 "assigned_rate_limits": { 00:30:41.053 "rw_ios_per_sec": 0, 00:30:41.053 "rw_mbytes_per_sec": 0, 00:30:41.053 "r_mbytes_per_sec": 0, 00:30:41.053 "w_mbytes_per_sec": 0 00:30:41.053 }, 00:30:41.053 "claimed": true, 00:30:41.053 "claim_type": "exclusive_write", 00:30:41.053 "zoned": false, 00:30:41.053 "supported_io_types": { 00:30:41.053 "read": true, 00:30:41.053 "write": true, 00:30:41.053 "unmap": true, 
00:30:41.053 "flush": true, 00:30:41.053 "reset": true, 00:30:41.053 "nvme_admin": false, 00:30:41.053 "nvme_io": false, 00:30:41.053 "nvme_io_md": false, 00:30:41.053 "write_zeroes": true, 00:30:41.053 "zcopy": true, 00:30:41.053 "get_zone_info": false, 00:30:41.053 "zone_management": false, 00:30:41.053 "zone_append": false, 00:30:41.053 "compare": false, 00:30:41.053 "compare_and_write": false, 00:30:41.053 "abort": true, 00:30:41.053 "seek_hole": false, 00:30:41.053 "seek_data": false, 00:30:41.053 "copy": true, 00:30:41.053 "nvme_iov_md": false 00:30:41.053 }, 00:30:41.053 "memory_domains": [ 00:30:41.053 { 00:30:41.053 "dma_device_id": "system", 00:30:41.053 "dma_device_type": 1 00:30:41.053 }, 00:30:41.053 { 00:30:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.053 "dma_device_type": 2 00:30:41.053 } 00:30:41.053 ], 00:30:41.053 "driver_specific": {} 00:30:41.053 } 00:30:41.053 ] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.053 13:59:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.053 "name": "Existed_Raid", 00:30:41.053 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:41.053 "strip_size_kb": 64, 00:30:41.053 "state": "online", 00:30:41.053 "raid_level": "raid0", 00:30:41.053 "superblock": true, 00:30:41.053 "num_base_bdevs": 3, 00:30:41.053 "num_base_bdevs_discovered": 3, 00:30:41.053 "num_base_bdevs_operational": 3, 00:30:41.053 "base_bdevs_list": [ 00:30:41.053 { 00:30:41.053 "name": "NewBaseBdev", 00:30:41.053 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:41.053 "is_configured": true, 00:30:41.053 "data_offset": 2048, 00:30:41.053 "data_size": 63488 00:30:41.053 }, 00:30:41.053 { 00:30:41.053 "name": "BaseBdev2", 00:30:41.053 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:41.053 "is_configured": true, 00:30:41.053 "data_offset": 2048, 00:30:41.053 "data_size": 63488 00:30:41.053 }, 00:30:41.053 { 00:30:41.053 "name": "BaseBdev3", 00:30:41.053 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:41.053 "is_configured": 
true, 00:30:41.053 "data_offset": 2048, 00:30:41.053 "data_size": 63488 00:30:41.053 } 00:30:41.053 ] 00:30:41.053 }' 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.053 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.621 [2024-10-09 13:59:47.898762] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.621 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:41.621 "name": "Existed_Raid", 00:30:41.621 "aliases": [ 00:30:41.621 "c442b403-01cc-4cdd-bcbc-fa69ad590c37" 00:30:41.621 ], 00:30:41.621 "product_name": "Raid Volume", 
00:30:41.621 "block_size": 512, 00:30:41.621 "num_blocks": 190464, 00:30:41.621 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:41.621 "assigned_rate_limits": { 00:30:41.621 "rw_ios_per_sec": 0, 00:30:41.621 "rw_mbytes_per_sec": 0, 00:30:41.621 "r_mbytes_per_sec": 0, 00:30:41.621 "w_mbytes_per_sec": 0 00:30:41.621 }, 00:30:41.621 "claimed": false, 00:30:41.621 "zoned": false, 00:30:41.621 "supported_io_types": { 00:30:41.621 "read": true, 00:30:41.621 "write": true, 00:30:41.621 "unmap": true, 00:30:41.621 "flush": true, 00:30:41.621 "reset": true, 00:30:41.621 "nvme_admin": false, 00:30:41.621 "nvme_io": false, 00:30:41.621 "nvme_io_md": false, 00:30:41.621 "write_zeroes": true, 00:30:41.621 "zcopy": false, 00:30:41.621 "get_zone_info": false, 00:30:41.621 "zone_management": false, 00:30:41.621 "zone_append": false, 00:30:41.621 "compare": false, 00:30:41.621 "compare_and_write": false, 00:30:41.621 "abort": false, 00:30:41.621 "seek_hole": false, 00:30:41.621 "seek_data": false, 00:30:41.621 "copy": false, 00:30:41.621 "nvme_iov_md": false 00:30:41.621 }, 00:30:41.621 "memory_domains": [ 00:30:41.621 { 00:30:41.621 "dma_device_id": "system", 00:30:41.621 "dma_device_type": 1 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.621 "dma_device_type": 2 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "dma_device_id": "system", 00:30:41.621 "dma_device_type": 1 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.621 "dma_device_type": 2 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "dma_device_id": "system", 00:30:41.621 "dma_device_type": 1 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.621 "dma_device_type": 2 00:30:41.621 } 00:30:41.621 ], 00:30:41.621 "driver_specific": { 00:30:41.621 "raid": { 00:30:41.621 "uuid": "c442b403-01cc-4cdd-bcbc-fa69ad590c37", 00:30:41.621 "strip_size_kb": 64, 00:30:41.621 "state": "online", 00:30:41.621 
"raid_level": "raid0", 00:30:41.621 "superblock": true, 00:30:41.621 "num_base_bdevs": 3, 00:30:41.621 "num_base_bdevs_discovered": 3, 00:30:41.621 "num_base_bdevs_operational": 3, 00:30:41.621 "base_bdevs_list": [ 00:30:41.621 { 00:30:41.621 "name": "NewBaseBdev", 00:30:41.621 "uuid": "40c76789-8878-44b1-9bc6-0e9a8d9a804f", 00:30:41.621 "is_configured": true, 00:30:41.621 "data_offset": 2048, 00:30:41.621 "data_size": 63488 00:30:41.621 }, 00:30:41.621 { 00:30:41.621 "name": "BaseBdev2", 00:30:41.621 "uuid": "10bc4ace-4ad8-4d13-ac2a-f5705ba7b23d", 00:30:41.621 "is_configured": true, 00:30:41.622 "data_offset": 2048, 00:30:41.622 "data_size": 63488 00:30:41.622 }, 00:30:41.622 { 00:30:41.622 "name": "BaseBdev3", 00:30:41.622 "uuid": "181d0f06-02f9-4320-814e-4ffc3138b1bb", 00:30:41.622 "is_configured": true, 00:30:41.622 "data_offset": 2048, 00:30:41.622 "data_size": 63488 00:30:41.622 } 00:30:41.622 ] 00:30:41.622 } 00:30:41.622 } 00:30:41.622 }' 00:30:41.622 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:41.622 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:41.622 BaseBdev2 00:30:41.622 BaseBdev3' 00:30:41.622 13:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.622 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.881 [2024-10-09 13:59:48.186454] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:41.881 [2024-10-09 13:59:48.186486] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:41.881 [2024-10-09 13:59:48.186591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:41.881 [2024-10-09 13:59:48.186660] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:41.881 [2024-10-09 13:59:48.186678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75953 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75953 ']' 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75953 00:30:41.881 13:59:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75953 00:30:41.881 killing process with pid 75953 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75953' 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75953 00:30:41.881 [2024-10-09 13:59:48.229560] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:41.881 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75953 00:30:41.881 [2024-10-09 13:59:48.262195] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:42.140 ************************************ 00:30:42.140 END TEST raid_state_function_test_sb 00:30:42.140 ************************************ 00:30:42.140 13:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:42.140 00:30:42.140 real 0m9.458s 00:30:42.140 user 0m16.329s 00:30:42.140 sys 0m1.922s 00:30:42.140 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.140 13:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.140 13:59:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:30:42.140 13:59:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:42.140 13:59:48 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:30:42.140 13:59:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:42.140 ************************************ 00:30:42.140 START TEST raid_superblock_test 00:30:42.140 ************************************ 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:42.140 13:59:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76568 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76568 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76568 ']' 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.140 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:42.141 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.141 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.141 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:42.141 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.141 [2024-10-09 13:59:48.678439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:42.141 [2024-10-09 13:59:48.678670] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76568 ] 00:30:42.400 [2024-10-09 13:59:48.857970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.400 [2024-10-09 13:59:48.905505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.659 [2024-10-09 13:59:48.949369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:42.659 [2024-10-09 13:59:48.949408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:42.659 
13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.659 13:59:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.659 malloc1
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.659 [2024-10-09 13:59:49.021878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:30:42.659 [2024-10-09 13:59:49.022092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:42.659 [2024-10-09 13:59:49.022160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:30:42.659 [2024-10-09 13:59:49.022265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:42.659 [2024-10-09 13:59:49.024884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:42.659 [2024-10-09 13:59:49.025037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:30:42.659 pt1
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.659 malloc2
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.659 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.659 [2024-10-09 13:59:49.068196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:30:42.660 [2024-10-09 13:59:49.068406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:42.660 [2024-10-09 13:59:49.068438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:30:42.660 [2024-10-09 13:59:49.068453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:42.660 [2024-10-09 13:59:49.071152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:42.660 [2024-10-09 13:59:49.071197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:30:42.660 pt2
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.660 malloc3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.660 [2024-10-09 13:59:49.097701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:30:42.660 [2024-10-09 13:59:49.097772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:42.660 [2024-10-09 13:59:49.097797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:30:42.660 [2024-10-09 13:59:49.097812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:42.660 [2024-10-09 13:59:49.100412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:42.660 [2024-10-09 13:59:49.100455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:30:42.660 pt3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.660 [2024-10-09 13:59:49.109742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:30:42.660 [2024-10-09 13:59:49.112109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:30:42.660 [2024-10-09 13:59:49.112195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:30:42.660 [2024-10-09 13:59:49.112350] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:30:42.660 [2024-10-09 13:59:49.112362] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:30:42.660 [2024-10-09 13:59:49.112658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
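The xtrace records above come from the setup loop in `bdev_raid.sh` (lines @416-@430): for each of the three base devices it creates a malloc bdev, wraps it in a passthru bdev with a fixed UUID, and finally assembles the three passthru bdevs into a raid0 volume with a superblock. A minimal dry-run sketch of that loop follows; the `scripts/rpc.py` path and socket are assumptions, and the commands are only printed, not executed against a running target:

```shell
# Dry-run sketch of the base-bdev setup loop traced above (bdev_raid.sh@416-430).
# The rpc client path/socket are assumed; RPC calls are printed, not executed.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed SPDK rpc client invocation
num_base_bdevs=3
base_bdevs_pt=""
cmds=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid=$(printf '00000000-0000-0000-0000-%012d' "$i")
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    cmds="$cmds$RPC bdev_malloc_create 32 512 -b $bdev_malloc
$RPC bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid
"
    base_bdevs_pt="$base_bdevs_pt $bdev_pt"
    i=$((i + 1))
done
# -z 64: strip size in KiB, -r raid0: raid level, -s: write an on-disk superblock
cmds="$cmds$RPC bdev_raid_create -z 64 -r raid0 -b '${base_bdevs_pt# }' -n raid_bdev1 -s"
printf '%s\n' "$cmds"
```

The passthru layer is what lets the test later delete and recreate individual base bdevs (`pt2` below) without disturbing the malloc backing devices or the superblock written on them.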
00:30:42.660 [2024-10-09 13:59:49.112802] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:30:42.660 [2024-10-09 13:59:49.112820] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:30:42.660 [2024-10-09 13:59:49.112957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:42.660 "name": "raid_bdev1",
00:30:42.660 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2",
00:30:42.660 "strip_size_kb": 64,
00:30:42.660 "state": "online",
00:30:42.660 "raid_level": "raid0",
00:30:42.660 "superblock": true,
00:30:42.660 "num_base_bdevs": 3,
00:30:42.660 "num_base_bdevs_discovered": 3,
00:30:42.660 "num_base_bdevs_operational": 3,
00:30:42.660 "base_bdevs_list": [
00:30:42.660 {
00:30:42.660 "name": "pt1",
00:30:42.660 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:42.660 "is_configured": true,
00:30:42.660 "data_offset": 2048,
00:30:42.660 "data_size": 63488
00:30:42.660 },
00:30:42.660 {
00:30:42.660 "name": "pt2",
00:30:42.660 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:42.660 "is_configured": true,
00:30:42.660 "data_offset": 2048,
00:30:42.660 "data_size": 63488
00:30:42.660 },
00:30:42.660 {
00:30:42.660 "name": "pt3",
00:30:42.660 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:42.660 "is_configured": true,
00:30:42.660 "data_offset": 2048,
00:30:42.660 "data_size": 63488
00:30:42.660 }
00:30:42.660 ]
00:30:42.660 }'
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:42.660 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:30:43.228 [2024-10-09 13:59:49.542157] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:30:43.228 "name": "raid_bdev1",
00:30:43.228 "aliases": [
00:30:43.228 "920789a5-ac82-42d2-ad23-ac4b8f28b8d2"
00:30:43.228 ],
00:30:43.228 "product_name": "Raid Volume",
00:30:43.228 "block_size": 512,
00:30:43.228 "num_blocks": 190464,
00:30:43.228 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2",
00:30:43.228 "assigned_rate_limits": {
00:30:43.228 "rw_ios_per_sec": 0,
00:30:43.228 "rw_mbytes_per_sec": 0,
00:30:43.228 "r_mbytes_per_sec": 0,
00:30:43.228 "w_mbytes_per_sec": 0
00:30:43.228 },
00:30:43.228 "claimed": false,
00:30:43.228 "zoned": false,
00:30:43.228 "supported_io_types": {
00:30:43.228 "read": true,
00:30:43.228 "write": true,
00:30:43.228 "unmap": true,
00:30:43.228 "flush": true,
00:30:43.228 "reset": true,
00:30:43.228 "nvme_admin": false,
00:30:43.228 "nvme_io": false,
00:30:43.228 "nvme_io_md": false,
00:30:43.228 "write_zeroes": true,
00:30:43.228 "zcopy": false,
00:30:43.228 "get_zone_info": false,
00:30:43.228 "zone_management": false,
00:30:43.228 "zone_append": false,
00:30:43.228 "compare": false,
00:30:43.228 "compare_and_write": false,
00:30:43.228 "abort": false,
00:30:43.228 "seek_hole": false,
00:30:43.228 "seek_data": false,
00:30:43.228 "copy": false,
00:30:43.228 "nvme_iov_md": false
00:30:43.228 },
00:30:43.228 "memory_domains": [
00:30:43.228 {
00:30:43.228 "dma_device_id": "system",
00:30:43.228 "dma_device_type": 1
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:43.228 "dma_device_type": 2
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "dma_device_id": "system",
00:30:43.228 "dma_device_type": 1
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:43.228 "dma_device_type": 2
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "dma_device_id": "system",
00:30:43.228 "dma_device_type": 1
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:43.228 "dma_device_type": 2
00:30:43.228 }
00:30:43.228 ],
00:30:43.228 "driver_specific": {
00:30:43.228 "raid": {
00:30:43.228 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2",
00:30:43.228 "strip_size_kb": 64,
00:30:43.228 "state": "online",
00:30:43.228 "raid_level": "raid0",
00:30:43.228 "superblock": true,
00:30:43.228 "num_base_bdevs": 3,
00:30:43.228 "num_base_bdevs_discovered": 3,
00:30:43.228 "num_base_bdevs_operational": 3,
00:30:43.228 "base_bdevs_list": [
00:30:43.228 {
00:30:43.228 "name": "pt1",
00:30:43.228 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:43.228 "is_configured": true,
00:30:43.228 "data_offset": 2048,
00:30:43.228 "data_size": 63488
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "name": "pt2",
00:30:43.228 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:43.228 "is_configured": true,
00:30:43.228 "data_offset": 2048,
00:30:43.228 "data_size": 63488
00:30:43.228 },
00:30:43.228 {
00:30:43.228 "name": "pt3",
00:30:43.228 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:43.228 "is_configured": true,
00:30:43.228 "data_offset": 2048,
00:30:43.228 "data_size": 63488
00:30:43.228 }
00:30:43.228 ]
00:30:43.228 }
00:30:43.228 }
00:30:43.228 }'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:30:43.228 pt2
00:30:43.228 pt3'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.228 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:43.229 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:30:43.488 [2024-10-09 13:59:49.818142] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=920789a5-ac82-42d2-ad23-ac4b8f28b8d2
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 920789a5-ac82-42d2-ad23-ac4b8f28b8d2 ']'
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 [2024-10-09 13:59:49.865830] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:30:43.488 [2024-10-09 13:59:49.865861] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:30:43.488 [2024-10-09 13:59:49.865940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:30:43.488 [2024-10-09 13:59:49.865999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:30:43.488 [2024-10-09 13:59:49.866021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
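At this point the test tears the configuration down in reverse order of creation: the raid volume is deleted first (`bdev_raid_delete` at bdev_raid.sh@441), then each passthru base bdev, and finally a `bdev_get_bdevs` listing is piped through jq to assert no bdev with `product_name == "passthru"` remains. A dry-run sketch of that sequence, with the rpc client path as an assumption and the commands printed rather than executed:

```shell
# Dry-run sketch of the teardown traced above (bdev_raid.sh@441-451):
# raid volume first, then each passthru base bdev. Commands are only printed.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed rpc client invocation
teardown="$RPC bdev_raid_delete raid_bdev1
"
for pt in pt1 pt2 pt3; do
    teardown="$teardown$RPC bdev_passthru_delete $pt
"
done
printf '%s' "$teardown"
# The test then verifies cleanup with:
#   rpc_cmd bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'
# and expects the answer to be false (no passthru bdevs left).
```

The ordering matters: deleting the raid bdev first releases its claims on the passthru bdevs, so the subsequent `bdev_passthru_delete` calls succeed.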
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 [2024-10-09 13:59:50.001921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:30:43.488 [2024-10-09 13:59:50.004433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:30:43.488 [2024-10-09 13:59:50.004484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:30:43.488 [2024-10-09 13:59:50.004540] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:30:43.488 [2024-10-09 13:59:50.004605] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:30:43.488 [2024-10-09 13:59:50.004631] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:30:43.488 [2024-10-09 13:59:50.004650] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:30:43.488 [2024-10-09 13:59:50.004664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring
00:30:43.488 request:
00:30:43.488 {
00:30:43.488 "name": "raid_bdev1",
00:30:43.488 "raid_level": "raid0",
00:30:43.488 "base_bdevs": [
00:30:43.488 "malloc1",
00:30:43.488 "malloc2",
00:30:43.488 "malloc3"
00:30:43.488 ],
00:30:43.488 "strip_size_kb": 64,
00:30:43.488 "superblock": false,
00:30:43.488 "method": "bdev_raid_create",
00:30:43.488 "req_id": 1
00:30:43.488 }
00:30:43.488 Got JSON-RPC error response
00:30:43.488 response:
00:30:43.488 {
00:30:43.488 "code": -17,
00:30:43.488 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:30:43.488 }
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.488 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.747 [2024-10-09 13:59:50.061919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:30:43.747 [2024-10-09 13:59:50.062131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:43.747 [2024-10-09 13:59:50.062195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:30:43.747 [2024-10-09 13:59:50.062281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:43.747 [2024-10-09 13:59:50.065106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:43.747 [2024-10-09 13:59:50.065255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:30:43.747 [2024-10-09 13:59:50.065430] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:30:43.747 [2024-10-09 13:59:50.065521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:30:43.747 pt1
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:43.747 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:43.748 "name": "raid_bdev1",
00:30:43.748 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2",
"strip_size_kb": 64,
00:30:43.748 "state": "configuring",
00:30:43.748 "raid_level": "raid0",
00:30:43.748 "superblock": true,
00:30:43.748 "num_base_bdevs": 3,
00:30:43.748 "num_base_bdevs_discovered": 1,
00:30:43.748 "num_base_bdevs_operational": 3,
00:30:43.748 "base_bdevs_list": [
00:30:43.748 {
00:30:43.748 "name": "pt1",
00:30:43.748 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:43.748 "is_configured": true,
00:30:43.748 "data_offset": 2048,
00:30:43.748 "data_size": 63488
00:30:43.748 },
00:30:43.748 {
00:30:43.748 "name": null,
00:30:43.748 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:43.748 "is_configured": false,
00:30:43.748 "data_offset": 2048,
00:30:43.748 "data_size": 63488
00:30:43.748 },
00:30:43.748 {
00:30:43.748 "name": null,
00:30:43.748 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:43.748 "is_configured": false,
00:30:43.748 "data_offset": 2048,
00:30:43.748 "data_size": 63488
00:30:43.748 }
00:30:43.748 ]
00:30:43.748 }'
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:43.748 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:44.006 [2024-10-09 13:59:50.538031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:30:44.006 [2024-10-09 13:59:50.538101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:44.006 [2024-10-09 13:59:50.538126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:30:44.006 [2024-10-09 13:59:50.538143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:44.006 [2024-10-09 13:59:50.538586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:44.006 [2024-10-09 13:59:50.538615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:30:44.006 [2024-10-09 13:59:50.538696] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:30:44.006 [2024-10-09 13:59:50.538726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:30:44.006 pt2
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:44.006 [2024-10-09 13:59:50.546023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:44.006 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:44.007 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:44.265 "name": "raid_bdev1",
00:30:44.265 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2",
00:30:44.265 "strip_size_kb": 64,
00:30:44.265 "state": "configuring",
00:30:44.265 "raid_level": "raid0",
00:30:44.265 "superblock": true,
00:30:44.265 "num_base_bdevs": 3,
00:30:44.265 "num_base_bdevs_discovered": 1,
00:30:44.265 "num_base_bdevs_operational": 3,
00:30:44.265 "base_bdevs_list": [
00:30:44.265 {
00:30:44.265 "name": "pt1",
00:30:44.265 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:44.265 "is_configured": true,
00:30:44.265 "data_offset": 2048,
00:30:44.265 "data_size": 63488
00:30:44.265 },
00:30:44.265 {
00:30:44.265 "name": null,
00:30:44.265 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:44.265 "is_configured": false,
00:30:44.265 "data_offset": 0,
00:30:44.265 "data_size": 63488
00:30:44.265 },
00:30:44.265 {
00:30:44.265 "name": null,
00:30:44.265 "uuid": "00000000-0000-0000-0000-000000000003",
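The failed `bdev_raid_create` above is intentional: the malloc bdevs still carry the superblock of the deleted raid volume, so the RPC returns -17 ("File exists") and the test wraps the call in `NOT` (autotest_common.sh) so the script succeeds only because the command failed. A simplified, hypothetical standalone sketch of that inversion pattern (the real helper also tracks the `es` status and xtrace state seen in the log):

```shell
# Simplified sketch of the NOT wrapper used at bdev_raid.sh@457 (hypothetical
# standalone version; it only inverts the wrapped command's exit status).
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, which is the expected outcome
}
# Stand-in for the rpc call that fails with -17 because the malloc bdevs
# still carry the old raid superblock (error text taken from the log above)
raid_create_stub() { echo 'Failed to create RAID bdev raid_bdev1: File exists' >&2; return 1; }
if NOT raid_create_stub 2>/dev/null; then
    echo 'expected failure observed'
fi
```

This is why the test then rebuilds the passthru bdevs one by one: each recreated `pt` bdev is examined, its superblock is recognized, and the raid bdev re-enters the `configuring` state with `num_base_bdevs_discovered` counting back up.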
"is_configured": false, 00:30:44.265 "data_offset": 2048, 00:30:44.265 "data_size": 63488 00:30:44.265 } 00:30:44.265 ] 00:30:44.265 }' 00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.265 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:44.525 [2024-10-09 13:59:50.986124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:44.525 [2024-10-09 13:59:50.986336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.525 [2024-10-09 13:59:50.986373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:30:44.525 [2024-10-09 13:59:50.986386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.525 [2024-10-09 13:59:50.986858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.525 [2024-10-09 13:59:50.986879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:44.525 [2024-10-09 13:59:50.986959] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:44.525 [2024-10-09 13:59:50.986982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:44.525 pt2 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:44.525 [2024-10-09 13:59:50.994087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:44.525 [2024-10-09 13:59:50.994148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.525 [2024-10-09 13:59:50.994172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:44.525 [2024-10-09 13:59:50.994184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.525 [2024-10-09 13:59:50.994583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.525 [2024-10-09 13:59:50.994615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:44.525 [2024-10-09 13:59:50.994685] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:44.525 [2024-10-09 13:59:50.994707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:44.525 [2024-10-09 13:59:50.994808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:44.525 [2024-10-09 13:59:50.994819] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:44.525 [2024-10-09 13:59:50.995068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:30:44.525 [2024-10-09 13:59:50.995172] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:44.525 [2024-10-09 13:59:50.995184] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:30:44.525 [2024-10-09 13:59:50.995281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:44.525 pt3 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:44.525 13:59:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:44.525 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:44.525 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:30:44.525 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.526 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:44.526 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.526 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:44.526 "name": "raid_bdev1", 00:30:44.526 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2", 00:30:44.526 "strip_size_kb": 64, 00:30:44.526 "state": "online", 00:30:44.526 "raid_level": "raid0", 00:30:44.526 "superblock": true, 00:30:44.526 "num_base_bdevs": 3, 00:30:44.526 "num_base_bdevs_discovered": 3, 00:30:44.526 "num_base_bdevs_operational": 3, 00:30:44.526 "base_bdevs_list": [ 00:30:44.526 { 00:30:44.526 "name": "pt1", 00:30:44.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:44.526 "is_configured": true, 00:30:44.526 "data_offset": 2048, 00:30:44.526 "data_size": 63488 00:30:44.526 }, 00:30:44.526 { 00:30:44.526 "name": "pt2", 00:30:44.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:44.526 "is_configured": true, 00:30:44.526 "data_offset": 2048, 00:30:44.526 "data_size": 63488 00:30:44.526 }, 00:30:44.526 { 00:30:44.526 "name": "pt3", 00:30:44.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:44.526 "is_configured": true, 00:30:44.526 "data_offset": 2048, 00:30:44.526 "data_size": 63488 00:30:44.526 } 00:30:44.526 ] 00:30:44.526 }' 00:30:44.526 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:44.526 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.093 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:45.093 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:45.093 13:59:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.094 [2024-10-09 13:59:51.470503] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.094 "name": "raid_bdev1", 00:30:45.094 "aliases": [ 00:30:45.094 "920789a5-ac82-42d2-ad23-ac4b8f28b8d2" 00:30:45.094 ], 00:30:45.094 "product_name": "Raid Volume", 00:30:45.094 "block_size": 512, 00:30:45.094 "num_blocks": 190464, 00:30:45.094 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2", 00:30:45.094 "assigned_rate_limits": { 00:30:45.094 "rw_ios_per_sec": 0, 00:30:45.094 "rw_mbytes_per_sec": 0, 00:30:45.094 "r_mbytes_per_sec": 0, 00:30:45.094 "w_mbytes_per_sec": 0 00:30:45.094 }, 00:30:45.094 "claimed": false, 00:30:45.094 "zoned": false, 00:30:45.094 "supported_io_types": { 00:30:45.094 "read": true, 00:30:45.094 "write": true, 00:30:45.094 "unmap": true, 00:30:45.094 "flush": true, 00:30:45.094 "reset": true, 00:30:45.094 "nvme_admin": false, 00:30:45.094 "nvme_io": false, 00:30:45.094 "nvme_io_md": false, 00:30:45.094 
"write_zeroes": true, 00:30:45.094 "zcopy": false, 00:30:45.094 "get_zone_info": false, 00:30:45.094 "zone_management": false, 00:30:45.094 "zone_append": false, 00:30:45.094 "compare": false, 00:30:45.094 "compare_and_write": false, 00:30:45.094 "abort": false, 00:30:45.094 "seek_hole": false, 00:30:45.094 "seek_data": false, 00:30:45.094 "copy": false, 00:30:45.094 "nvme_iov_md": false 00:30:45.094 }, 00:30:45.094 "memory_domains": [ 00:30:45.094 { 00:30:45.094 "dma_device_id": "system", 00:30:45.094 "dma_device_type": 1 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.094 "dma_device_type": 2 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "dma_device_id": "system", 00:30:45.094 "dma_device_type": 1 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.094 "dma_device_type": 2 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "dma_device_id": "system", 00:30:45.094 "dma_device_type": 1 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.094 "dma_device_type": 2 00:30:45.094 } 00:30:45.094 ], 00:30:45.094 "driver_specific": { 00:30:45.094 "raid": { 00:30:45.094 "uuid": "920789a5-ac82-42d2-ad23-ac4b8f28b8d2", 00:30:45.094 "strip_size_kb": 64, 00:30:45.094 "state": "online", 00:30:45.094 "raid_level": "raid0", 00:30:45.094 "superblock": true, 00:30:45.094 "num_base_bdevs": 3, 00:30:45.094 "num_base_bdevs_discovered": 3, 00:30:45.094 "num_base_bdevs_operational": 3, 00:30:45.094 "base_bdevs_list": [ 00:30:45.094 { 00:30:45.094 "name": "pt1", 00:30:45.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:45.094 "is_configured": true, 00:30:45.094 "data_offset": 2048, 00:30:45.094 "data_size": 63488 00:30:45.094 }, 00:30:45.094 { 00:30:45.094 "name": "pt2", 00:30:45.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:45.094 "is_configured": true, 00:30:45.094 "data_offset": 2048, 00:30:45.094 "data_size": 63488 00:30:45.094 }, 00:30:45.094 
{ 00:30:45.094 "name": "pt3", 00:30:45.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:45.094 "is_configured": true, 00:30:45.094 "data_offset": 2048, 00:30:45.094 "data_size": 63488 00:30:45.094 } 00:30:45.094 ] 00:30:45.094 } 00:30:45.094 } 00:30:45.094 }' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:45.094 pt2 00:30:45.094 pt3' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.094 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:45.353 13:59:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:45.353 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.354 
[2024-10-09 13:59:51.754625] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 920789a5-ac82-42d2-ad23-ac4b8f28b8d2 '!=' 920789a5-ac82-42d2-ad23-ac4b8f28b8d2 ']' 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76568 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76568 ']' 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76568 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76568 00:30:45.354 killing process with pid 76568 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76568' 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76568 00:30:45.354 [2024-10-09 13:59:51.832983] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:45.354 [2024-10-09 13:59:51.833072] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:45.354 13:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76568 00:30:45.354 [2024-10-09 13:59:51.833139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:45.354 [2024-10-09 13:59:51.833150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:30:45.354 [2024-10-09 13:59:51.870125] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:45.613 13:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:45.613 00:30:45.613 real 0m3.556s 00:30:45.613 user 0m5.896s 00:30:45.613 sys 0m0.883s 00:30:45.613 13:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.613 ************************************ 00:30:45.613 END TEST raid_superblock_test 00:30:45.613 ************************************ 00:30:45.613 13:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.871 13:59:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:30:45.871 13:59:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:45.871 13:59:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.871 13:59:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:45.871 ************************************ 00:30:45.871 START TEST raid_read_error_test 00:30:45.871 ************************************ 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:30:45.871 13:59:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:45.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yfAhhKnNI9 00:30:45.871 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76797 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76797 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76797 ']' 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.872 13:59:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:45.872 [2024-10-09 13:59:52.321616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:30:45.872 [2024-10-09 13:59:52.321894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76797 ] 00:30:46.130 [2024-10-09 13:59:52.503048] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.130 [2024-10-09 13:59:52.549001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.130 [2024-10-09 13:59:52.593795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:46.130 [2024-10-09 13:59:52.593837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 BaseBdev1_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 true 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 [2024-10-09 13:59:53.286406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:47.067 [2024-10-09 13:59:53.286468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.067 [2024-10-09 13:59:53.286492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:47.067 [2024-10-09 13:59:53.286504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.067 [2024-10-09 13:59:53.289051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.067 [2024-10-09 13:59:53.289096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:47.067 BaseBdev1 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 BaseBdev2_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 true 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 [2024-10-09 13:59:53.335886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:47.067 [2024-10-09 13:59:53.335940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.067 [2024-10-09 13:59:53.335963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:47.067 [2024-10-09 13:59:53.335975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.067 [2024-10-09 13:59:53.338474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.067 [2024-10-09 13:59:53.338667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:47.067 BaseBdev2 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 BaseBdev3_malloc 00:30:47.067 13:59:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 true 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 [2024-10-09 13:59:53.365051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:47.067 [2024-10-09 13:59:53.365208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:47.067 [2024-10-09 13:59:53.365239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:47.067 [2024-10-09 13:59:53.365251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:47.067 [2024-10-09 13:59:53.367692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:47.067 [2024-10-09 13:59:53.367731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:47.067 BaseBdev3 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.067 [2024-10-09 13:59:53.373122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:47.067 [2024-10-09 13:59:53.375302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:47.067 [2024-10-09 13:59:53.375384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:47.067 [2024-10-09 13:59:53.375573] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:30:47.067 [2024-10-09 13:59:53.375589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:47.067 [2024-10-09 13:59:53.375871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:47.067 [2024-10-09 13:59:53.375995] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:30:47.067 [2024-10-09 13:59:53.376007] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:30:47.067 [2024-10-09 13:59:53.376135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:47.067 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:47.068 13:59:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.068 "name": "raid_bdev1", 00:30:47.068 "uuid": "e0d1b416-abf0-4398-a6c7-f7b773600d54", 00:30:47.068 "strip_size_kb": 64, 00:30:47.068 "state": "online", 00:30:47.068 "raid_level": "raid0", 00:30:47.068 "superblock": true, 00:30:47.068 "num_base_bdevs": 3, 00:30:47.068 "num_base_bdevs_discovered": 3, 00:30:47.068 "num_base_bdevs_operational": 3, 00:30:47.068 "base_bdevs_list": [ 00:30:47.068 { 00:30:47.068 "name": "BaseBdev1", 00:30:47.068 "uuid": "e7eaccb7-123c-5e3b-b271-4b25e7ba425d", 00:30:47.068 "is_configured": true, 00:30:47.068 "data_offset": 2048, 00:30:47.068 "data_size": 63488 00:30:47.068 }, 00:30:47.068 { 00:30:47.068 "name": "BaseBdev2", 00:30:47.068 "uuid": "07104ded-0279-55b4-abca-1b3b99973254", 00:30:47.068 "is_configured": true, 00:30:47.068 "data_offset": 2048, 00:30:47.068 "data_size": 63488 
00:30:47.068 }, 00:30:47.068 { 00:30:47.068 "name": "BaseBdev3", 00:30:47.068 "uuid": "5e460825-de1b-5d33-9077-f1a8b3db6b51", 00:30:47.068 "is_configured": true, 00:30:47.068 "data_offset": 2048, 00:30:47.068 "data_size": 63488 00:30:47.068 } 00:30:47.068 ] 00:30:47.068 }' 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.068 13:59:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:47.326 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:47.326 13:59:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:47.586 [2024-10-09 13:59:53.989654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.521 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:48.521 "name": "raid_bdev1", 00:30:48.521 "uuid": "e0d1b416-abf0-4398-a6c7-f7b773600d54", 00:30:48.521 "strip_size_kb": 64, 00:30:48.521 "state": "online", 00:30:48.521 "raid_level": "raid0", 00:30:48.521 "superblock": true, 00:30:48.521 "num_base_bdevs": 3, 00:30:48.521 "num_base_bdevs_discovered": 3, 00:30:48.521 "num_base_bdevs_operational": 3, 00:30:48.521 "base_bdevs_list": [ 00:30:48.521 { 00:30:48.521 "name": "BaseBdev1", 00:30:48.521 "uuid": "e7eaccb7-123c-5e3b-b271-4b25e7ba425d", 00:30:48.521 "is_configured": true, 00:30:48.521 "data_offset": 2048, 00:30:48.521 "data_size": 63488 
00:30:48.521 }, 00:30:48.521 { 00:30:48.521 "name": "BaseBdev2", 00:30:48.521 "uuid": "07104ded-0279-55b4-abca-1b3b99973254", 00:30:48.521 "is_configured": true, 00:30:48.522 "data_offset": 2048, 00:30:48.522 "data_size": 63488 00:30:48.522 }, 00:30:48.522 { 00:30:48.522 "name": "BaseBdev3", 00:30:48.522 "uuid": "5e460825-de1b-5d33-9077-f1a8b3db6b51", 00:30:48.522 "is_configured": true, 00:30:48.522 "data_offset": 2048, 00:30:48.522 "data_size": 63488 00:30:48.522 } 00:30:48.522 ] 00:30:48.522 }' 00:30:48.522 13:59:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:48.522 13:59:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.101 [2024-10-09 13:59:55.356901] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:49.101 [2024-10-09 13:59:55.356935] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:49.101 [2024-10-09 13:59:55.359798] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:49.101 [2024-10-09 13:59:55.359855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.101 [2024-10-09 13:59:55.359910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:49.101 [2024-10-09 13:59:55.359926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:30:49.101 { 00:30:49.101 "results": [ 00:30:49.101 { 00:30:49.101 "job": "raid_bdev1", 00:30:49.101 "core_mask": "0x1", 00:30:49.101 "workload": "randrw", 00:30:49.101 "percentage": 50, 
00:30:49.101 "status": "finished", 00:30:49.101 "queue_depth": 1, 00:30:49.101 "io_size": 131072, 00:30:49.101 "runtime": 1.36466, 00:30:49.101 "iops": 15620.007914059179, 00:30:49.101 "mibps": 1952.5009892573974, 00:30:49.101 "io_failed": 1, 00:30:49.101 "io_timeout": 0, 00:30:49.101 "avg_latency_us": 88.42480595634605, 00:30:49.101 "min_latency_us": 27.55047619047619, 00:30:49.101 "max_latency_us": 1583.7866666666666 00:30:49.101 } 00:30:49.101 ], 00:30:49.101 "core_count": 1 00:30:49.101 } 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76797 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76797 ']' 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76797 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76797 00:30:49.101 killing process with pid 76797 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76797' 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76797 00:30:49.101 [2024-10-09 13:59:55.409329] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:49.101 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76797 00:30:49.101 [2024-10-09 
13:59:55.435371] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:49.409 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yfAhhKnNI9 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:30:49.410 00:30:49.410 real 0m3.507s 00:30:49.410 user 0m4.538s 00:30:49.410 sys 0m0.600s 00:30:49.410 ************************************ 00:30:49.410 END TEST raid_read_error_test 00:30:49.410 ************************************ 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.410 13:59:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.410 13:59:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:30:49.410 13:59:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:49.410 13:59:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.410 13:59:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:49.410 ************************************ 00:30:49.410 START TEST raid_write_error_test 00:30:49.410 ************************************ 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:30:49.410 13:59:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:49.410 13:59:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P5H8ljkePh 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76932 00:30:49.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76932 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76932 ']' 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:49.410 13:59:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:49.410 [2024-10-09 13:59:55.878157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:49.410 [2024-10-09 13:59:55.878347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76932 ] 00:30:49.668 [2024-10-09 13:59:56.056750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.668 [2024-10-09 13:59:56.103740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.668 [2024-10-09 13:59:56.147477] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:49.668 [2024-10-09 13:59:56.147525] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 BaseBdev1_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 true 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 [2024-10-09 13:59:56.864057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:50.605 [2024-10-09 13:59:56.864241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:50.605 [2024-10-09 13:59:56.864280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:50.605 [2024-10-09 13:59:56.864293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:50.605 [2024-10-09 13:59:56.866973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:50.605 [2024-10-09 13:59:56.867013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:50.605 BaseBdev1 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:50.605 BaseBdev2_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 true 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 [2024-10-09 13:59:56.912545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:50.605 [2024-10-09 13:59:56.912613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:50.605 [2024-10-09 13:59:56.912636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:50.605 [2024-10-09 13:59:56.912647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:50.605 [2024-10-09 13:59:56.915376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:50.605 [2024-10-09 13:59:56.915420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:50.605 BaseBdev2 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:50.605 13:59:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 BaseBdev3_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 true 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 [2024-10-09 13:59:56.941945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:50.605 [2024-10-09 13:59:56.941996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:50.605 [2024-10-09 13:59:56.942035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:50.605 [2024-10-09 13:59:56.942049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:50.605 [2024-10-09 13:59:56.944744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:50.605 [2024-10-09 13:59:56.944785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:30:50.605 BaseBdev3 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 [2024-10-09 13:59:56.950002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:50.605 [2024-10-09 13:59:56.952392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:50.605 [2024-10-09 13:59:56.952477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:50.605 [2024-10-09 13:59:56.952689] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:30:50.605 [2024-10-09 13:59:56.952712] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:30:50.605 [2024-10-09 13:59:56.953010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:50.605 [2024-10-09 13:59:56.953156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:30:50.605 [2024-10-09 13:59:56.953168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:30:50.605 [2024-10-09 13:59:56.953296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.605 13:59:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:50.605 13:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.605 "name": "raid_bdev1", 00:30:50.605 "uuid": "787d7df6-47c0-434c-ac70-16e8103e54b7", 00:30:50.605 "strip_size_kb": 64, 00:30:50.605 "state": "online", 00:30:50.605 "raid_level": "raid0", 00:30:50.605 "superblock": true, 00:30:50.605 "num_base_bdevs": 3, 00:30:50.605 "num_base_bdevs_discovered": 3, 00:30:50.605 "num_base_bdevs_operational": 3, 00:30:50.605 "base_bdevs_list": [ 00:30:50.605 { 00:30:50.605 "name": "BaseBdev1", 
00:30:50.605 "uuid": "5a835be1-b84c-523e-b89f-fef4803c58b5", 00:30:50.605 "is_configured": true, 00:30:50.605 "data_offset": 2048, 00:30:50.605 "data_size": 63488 00:30:50.605 }, 00:30:50.605 { 00:30:50.605 "name": "BaseBdev2", 00:30:50.605 "uuid": "3033c5d4-b19c-5096-a981-7429f27c6363", 00:30:50.605 "is_configured": true, 00:30:50.605 "data_offset": 2048, 00:30:50.605 "data_size": 63488 00:30:50.605 }, 00:30:50.605 { 00:30:50.605 "name": "BaseBdev3", 00:30:50.605 "uuid": "f28341ac-baa6-563c-a26c-5e74a5a532cf", 00:30:50.606 "is_configured": true, 00:30:50.606 "data_offset": 2048, 00:30:50.606 "data_size": 63488 00:30:50.606 } 00:30:50.606 ] 00:30:50.606 }' 00:30:50.606 13:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.606 13:59:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:50.864 13:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:50.864 13:59:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:51.123 [2024-10-09 13:59:57.506516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.060 "name": "raid_bdev1", 00:30:52.060 "uuid": "787d7df6-47c0-434c-ac70-16e8103e54b7", 00:30:52.060 "strip_size_kb": 64, 00:30:52.060 "state": "online", 00:30:52.060 
"raid_level": "raid0", 00:30:52.060 "superblock": true, 00:30:52.060 "num_base_bdevs": 3, 00:30:52.060 "num_base_bdevs_discovered": 3, 00:30:52.060 "num_base_bdevs_operational": 3, 00:30:52.060 "base_bdevs_list": [ 00:30:52.060 { 00:30:52.060 "name": "BaseBdev1", 00:30:52.060 "uuid": "5a835be1-b84c-523e-b89f-fef4803c58b5", 00:30:52.060 "is_configured": true, 00:30:52.060 "data_offset": 2048, 00:30:52.060 "data_size": 63488 00:30:52.060 }, 00:30:52.060 { 00:30:52.060 "name": "BaseBdev2", 00:30:52.060 "uuid": "3033c5d4-b19c-5096-a981-7429f27c6363", 00:30:52.060 "is_configured": true, 00:30:52.060 "data_offset": 2048, 00:30:52.060 "data_size": 63488 00:30:52.060 }, 00:30:52.060 { 00:30:52.060 "name": "BaseBdev3", 00:30:52.060 "uuid": "f28341ac-baa6-563c-a26c-5e74a5a532cf", 00:30:52.060 "is_configured": true, 00:30:52.060 "data_offset": 2048, 00:30:52.060 "data_size": 63488 00:30:52.060 } 00:30:52.060 ] 00:30:52.060 }' 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.060 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.319 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:52.319 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.319 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.319 [2024-10-09 13:59:58.866011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:52.578 [2024-10-09 13:59:58.866188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:52.578 [2024-10-09 13:59:58.869165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:52.578 [2024-10-09 13:59:58.869222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.578 [2024-10-09 13:59:58.869264] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:52.578 [2024-10-09 13:59:58.869280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:30:52.578 { 00:30:52.578 "results": [ 00:30:52.578 { 00:30:52.578 "job": "raid_bdev1", 00:30:52.578 "core_mask": "0x1", 00:30:52.578 "workload": "randrw", 00:30:52.578 "percentage": 50, 00:30:52.578 "status": "finished", 00:30:52.578 "queue_depth": 1, 00:30:52.578 "io_size": 131072, 00:30:52.578 "runtime": 1.357143, 00:30:52.578 "iops": 15804.5246521553, 00:30:52.578 "mibps": 1975.5655815194125, 00:30:52.578 "io_failed": 1, 00:30:52.578 "io_timeout": 0, 00:30:52.578 "avg_latency_us": 87.49141418581418, 00:30:52.578 "min_latency_us": 25.47809523809524, 00:30:52.578 "max_latency_us": 1654.0038095238094 00:30:52.578 } 00:30:52.578 ], 00:30:52.578 "core_count": 1 00:30:52.578 } 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76932 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76932 ']' 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76932 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76932 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76932' 00:30:52.578 killing process with pid 76932 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76932 00:30:52.578 [2024-10-09 13:59:58.915296] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:52.578 13:59:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76932 00:30:52.578 [2024-10-09 13:59:58.943152] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P5H8ljkePh 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:30:52.836 00:30:52.836 real 0m3.456s 00:30:52.836 user 0m4.429s 00:30:52.836 sys 0m0.623s 00:30:52.836 ************************************ 00:30:52.836 END TEST raid_write_error_test 00:30:52.836 ************************************ 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:52.836 13:59:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:52.836 13:59:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:30:52.836 13:59:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:30:52.836 13:59:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:52.836 13:59:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.836 13:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:52.836 ************************************ 00:30:52.836 START TEST raid_state_function_test 00:30:52.836 ************************************ 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.836 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:52.837 13:59:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77064 00:30:52.837 Process raid pid: 77064 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77064' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77064 00:30:52.837 13:59:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77064 ']' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:52.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:52.837 13:59:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.095 [2024-10-09 13:59:59.411376] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:53.095 [2024-10-09 13:59:59.411635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.095 [2024-10-09 13:59:59.599498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.354 [2024-10-09 13:59:59.649578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.354 [2024-10-09 13:59:59.694205] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:53.354 [2024-10-09 13:59:59.694456] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.921 [2024-10-09 14:00:00.442059] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:53.921 [2024-10-09 14:00:00.442133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:53.921 [2024-10-09 14:00:00.442160] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:53.921 [2024-10-09 14:00:00.442180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:53.921 [2024-10-09 14:00:00.442192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:53.921 [2024-10-09 14:00:00.442216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.921 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.180 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.180 "name": "Existed_Raid", 00:30:54.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.180 "strip_size_kb": 64, 00:30:54.180 "state": "configuring", 00:30:54.180 "raid_level": "concat", 00:30:54.180 "superblock": false, 00:30:54.180 "num_base_bdevs": 3, 00:30:54.180 "num_base_bdevs_discovered": 0, 00:30:54.180 "num_base_bdevs_operational": 3, 00:30:54.180 "base_bdevs_list": [ 00:30:54.180 { 00:30:54.180 "name": "BaseBdev1", 00:30:54.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.180 "is_configured": false, 00:30:54.180 "data_offset": 0, 00:30:54.180 "data_size": 0 00:30:54.180 }, 00:30:54.180 { 00:30:54.180 "name": "BaseBdev2", 00:30:54.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.180 "is_configured": false, 00:30:54.180 "data_offset": 0, 00:30:54.180 "data_size": 0 00:30:54.180 }, 00:30:54.180 { 00:30:54.180 "name": "BaseBdev3", 00:30:54.180 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:54.180 "is_configured": false, 00:30:54.180 "data_offset": 0, 00:30:54.180 "data_size": 0 00:30:54.180 } 00:30:54.180 ] 00:30:54.180 }' 00:30:54.180 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.180 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.439 [2024-10-09 14:00:00.906017] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.439 [2024-10-09 14:00:00.906066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.439 [2024-10-09 14:00:00.918055] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:54.439 [2024-10-09 14:00:00.918221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:54.439 [2024-10-09 14:00:00.918313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:54.439 [2024-10-09 14:00:00.918362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:30:54.439 [2024-10-09 14:00:00.918544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:54.439 [2024-10-09 14:00:00.918655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.439 [2024-10-09 14:00:00.939876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:54.439 BaseBdev1 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.439 [ 00:30:54.439 { 00:30:54.439 "name": "BaseBdev1", 00:30:54.439 "aliases": [ 00:30:54.439 "e05336d9-e9db-42a0-9d14-e45b758429b9" 00:30:54.439 ], 00:30:54.439 "product_name": "Malloc disk", 00:30:54.439 "block_size": 512, 00:30:54.439 "num_blocks": 65536, 00:30:54.439 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:54.439 "assigned_rate_limits": { 00:30:54.439 "rw_ios_per_sec": 0, 00:30:54.439 "rw_mbytes_per_sec": 0, 00:30:54.439 "r_mbytes_per_sec": 0, 00:30:54.439 "w_mbytes_per_sec": 0 00:30:54.439 }, 00:30:54.439 "claimed": true, 00:30:54.439 "claim_type": "exclusive_write", 00:30:54.439 "zoned": false, 00:30:54.439 "supported_io_types": { 00:30:54.439 "read": true, 00:30:54.439 "write": true, 00:30:54.439 "unmap": true, 00:30:54.439 "flush": true, 00:30:54.439 "reset": true, 00:30:54.439 "nvme_admin": false, 00:30:54.439 "nvme_io": false, 00:30:54.439 "nvme_io_md": false, 00:30:54.439 "write_zeroes": true, 00:30:54.439 "zcopy": true, 00:30:54.439 "get_zone_info": false, 00:30:54.439 "zone_management": false, 00:30:54.439 "zone_append": false, 00:30:54.439 "compare": false, 00:30:54.439 "compare_and_write": false, 00:30:54.439 "abort": true, 00:30:54.439 "seek_hole": false, 00:30:54.439 "seek_data": false, 00:30:54.439 "copy": true, 00:30:54.439 "nvme_iov_md": false 00:30:54.439 }, 00:30:54.439 "memory_domains": [ 00:30:54.439 { 00:30:54.439 "dma_device_id": "system", 00:30:54.439 "dma_device_type": 1 00:30:54.439 }, 00:30:54.439 { 00:30:54.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:30:54.439 "dma_device_type": 2 00:30:54.439 } 00:30:54.439 ], 00:30:54.439 "driver_specific": {} 00:30:54.439 } 00:30:54.439 ] 00:30:54.439 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.698 14:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.698 14:00:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.698 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:54.698 "name": "Existed_Raid", 00:30:54.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.698 "strip_size_kb": 64, 00:30:54.698 "state": "configuring", 00:30:54.698 "raid_level": "concat", 00:30:54.698 "superblock": false, 00:30:54.698 "num_base_bdevs": 3, 00:30:54.698 "num_base_bdevs_discovered": 1, 00:30:54.698 "num_base_bdevs_operational": 3, 00:30:54.698 "base_bdevs_list": [ 00:30:54.698 { 00:30:54.698 "name": "BaseBdev1", 00:30:54.698 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:54.699 "is_configured": true, 00:30:54.699 "data_offset": 0, 00:30:54.699 "data_size": 65536 00:30:54.699 }, 00:30:54.699 { 00:30:54.699 "name": "BaseBdev2", 00:30:54.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.699 "is_configured": false, 00:30:54.699 "data_offset": 0, 00:30:54.699 "data_size": 0 00:30:54.699 }, 00:30:54.699 { 00:30:54.699 "name": "BaseBdev3", 00:30:54.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.699 "is_configured": false, 00:30:54.699 "data_offset": 0, 00:30:54.699 "data_size": 0 00:30:54.699 } 00:30:54.699 ] 00:30:54.699 }' 00:30:54.699 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:54.699 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 [2024-10-09 14:00:01.456086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.958 [2024-10-09 14:00:01.456153] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 [2024-10-09 14:00:01.468138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:54.958 [2024-10-09 14:00:01.470734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:54.958 [2024-10-09 14:00:01.470907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:54.958 [2024-10-09 14:00:01.471005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:54.958 [2024-10-09 14:00:01.471059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:54.958 14:00:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:54.958 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.220 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.220 "name": "Existed_Raid", 00:30:55.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.220 "strip_size_kb": 64, 00:30:55.220 "state": "configuring", 00:30:55.220 "raid_level": "concat", 00:30:55.220 "superblock": false, 00:30:55.220 "num_base_bdevs": 3, 00:30:55.220 "num_base_bdevs_discovered": 1, 00:30:55.220 "num_base_bdevs_operational": 3, 00:30:55.220 "base_bdevs_list": [ 00:30:55.220 { 00:30:55.220 "name": "BaseBdev1", 00:30:55.220 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:55.220 "is_configured": true, 00:30:55.220 "data_offset": 
0, 00:30:55.220 "data_size": 65536 00:30:55.220 }, 00:30:55.220 { 00:30:55.220 "name": "BaseBdev2", 00:30:55.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.220 "is_configured": false, 00:30:55.220 "data_offset": 0, 00:30:55.220 "data_size": 0 00:30:55.220 }, 00:30:55.220 { 00:30:55.220 "name": "BaseBdev3", 00:30:55.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.220 "is_configured": false, 00:30:55.220 "data_offset": 0, 00:30:55.220 "data_size": 0 00:30:55.220 } 00:30:55.220 ] 00:30:55.220 }' 00:30:55.220 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.220 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 [2024-10-09 14:00:01.948162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:55.478 BaseBdev2 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.478 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.478 [ 00:30:55.478 { 00:30:55.478 "name": "BaseBdev2", 00:30:55.478 "aliases": [ 00:30:55.479 "cc59ca12-0af9-4a8f-867b-c66cd06294d7" 00:30:55.479 ], 00:30:55.479 "product_name": "Malloc disk", 00:30:55.479 "block_size": 512, 00:30:55.479 "num_blocks": 65536, 00:30:55.479 "uuid": "cc59ca12-0af9-4a8f-867b-c66cd06294d7", 00:30:55.479 "assigned_rate_limits": { 00:30:55.479 "rw_ios_per_sec": 0, 00:30:55.479 "rw_mbytes_per_sec": 0, 00:30:55.479 "r_mbytes_per_sec": 0, 00:30:55.479 "w_mbytes_per_sec": 0 00:30:55.479 }, 00:30:55.479 "claimed": true, 00:30:55.479 "claim_type": "exclusive_write", 00:30:55.479 "zoned": false, 00:30:55.479 "supported_io_types": { 00:30:55.479 "read": true, 00:30:55.479 "write": true, 00:30:55.479 "unmap": true, 00:30:55.479 "flush": true, 00:30:55.479 "reset": true, 00:30:55.479 "nvme_admin": false, 00:30:55.479 "nvme_io": false, 00:30:55.479 "nvme_io_md": false, 00:30:55.479 "write_zeroes": true, 00:30:55.479 "zcopy": true, 00:30:55.479 "get_zone_info": false, 00:30:55.479 "zone_management": false, 00:30:55.479 "zone_append": false, 00:30:55.479 "compare": false, 00:30:55.479 "compare_and_write": false, 00:30:55.479 "abort": true, 00:30:55.479 "seek_hole": 
false, 00:30:55.479 "seek_data": false, 00:30:55.479 "copy": true, 00:30:55.479 "nvme_iov_md": false 00:30:55.479 }, 00:30:55.479 "memory_domains": [ 00:30:55.479 { 00:30:55.479 "dma_device_id": "system", 00:30:55.479 "dma_device_type": 1 00:30:55.479 }, 00:30:55.479 { 00:30:55.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.479 "dma_device_type": 2 00:30:55.479 } 00:30:55.479 ], 00:30:55.479 "driver_specific": {} 00:30:55.479 } 00:30:55.479 ] 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.479 14:00:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.479 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.738 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.738 "name": "Existed_Raid", 00:30:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.738 "strip_size_kb": 64, 00:30:55.738 "state": "configuring", 00:30:55.738 "raid_level": "concat", 00:30:55.738 "superblock": false, 00:30:55.738 "num_base_bdevs": 3, 00:30:55.738 "num_base_bdevs_discovered": 2, 00:30:55.738 "num_base_bdevs_operational": 3, 00:30:55.738 "base_bdevs_list": [ 00:30:55.738 { 00:30:55.738 "name": "BaseBdev1", 00:30:55.738 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:55.738 "is_configured": true, 00:30:55.738 "data_offset": 0, 00:30:55.738 "data_size": 65536 00:30:55.738 }, 00:30:55.738 { 00:30:55.738 "name": "BaseBdev2", 00:30:55.738 "uuid": "cc59ca12-0af9-4a8f-867b-c66cd06294d7", 00:30:55.738 "is_configured": true, 00:30:55.738 "data_offset": 0, 00:30:55.738 "data_size": 65536 00:30:55.738 }, 00:30:55.738 { 00:30:55.738 "name": "BaseBdev3", 00:30:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.738 "is_configured": false, 00:30:55.738 "data_offset": 0, 00:30:55.738 "data_size": 0 00:30:55.738 } 00:30:55.738 ] 00:30:55.738 }' 00:30:55.738 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:55.738 14:00:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.997 [2024-10-09 14:00:02.432613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:55.997 [2024-10-09 14:00:02.432931] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:30:55.997 [2024-10-09 14:00:02.432979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:30:55.997 [2024-10-09 14:00:02.433487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:30:55.997 [2024-10-09 14:00:02.433744] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:30:55.997 [2024-10-09 14:00:02.433764] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:30:55.997 [2024-10-09 14:00:02.434050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:55.997 BaseBdev3 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:55.997 14:00:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.997 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.997 [ 00:30:55.997 { 00:30:55.997 "name": "BaseBdev3", 00:30:55.997 "aliases": [ 00:30:55.997 "6c0dbc9e-b1f3-4e8e-b2e8-537fc5179927" 00:30:55.997 ], 00:30:55.997 "product_name": "Malloc disk", 00:30:55.997 "block_size": 512, 00:30:55.997 "num_blocks": 65536, 00:30:55.997 "uuid": "6c0dbc9e-b1f3-4e8e-b2e8-537fc5179927", 00:30:55.997 "assigned_rate_limits": { 00:30:55.998 "rw_ios_per_sec": 0, 00:30:55.998 "rw_mbytes_per_sec": 0, 00:30:55.998 "r_mbytes_per_sec": 0, 00:30:55.998 "w_mbytes_per_sec": 0 00:30:55.998 }, 00:30:55.998 "claimed": true, 00:30:55.998 "claim_type": "exclusive_write", 00:30:55.998 "zoned": false, 00:30:55.998 "supported_io_types": { 00:30:55.998 "read": true, 00:30:55.998 "write": true, 00:30:55.998 "unmap": true, 00:30:55.998 "flush": true, 00:30:55.998 "reset": true, 00:30:55.998 "nvme_admin": false, 00:30:55.998 "nvme_io": false, 00:30:55.998 "nvme_io_md": false, 00:30:55.998 "write_zeroes": true, 00:30:55.998 "zcopy": true, 00:30:55.998 "get_zone_info": false, 00:30:55.998 "zone_management": false, 00:30:55.998 "zone_append": false, 00:30:55.998 "compare": false, 
00:30:55.998 "compare_and_write": false, 00:30:55.998 "abort": true, 00:30:55.998 "seek_hole": false, 00:30:55.998 "seek_data": false, 00:30:55.998 "copy": true, 00:30:55.998 "nvme_iov_md": false 00:30:55.998 }, 00:30:55.998 "memory_domains": [ 00:30:55.998 { 00:30:55.998 "dma_device_id": "system", 00:30:55.998 "dma_device_type": 1 00:30:55.998 }, 00:30:55.998 { 00:30:55.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:55.998 "dma_device_type": 2 00:30:55.998 } 00:30:55.998 ], 00:30:55.998 "driver_specific": {} 00:30:55.998 } 00:30:55.998 ] 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:55.998 "name": "Existed_Raid", 00:30:55.998 "uuid": "a56cf9f7-683e-4778-82a0-038101e43504", 00:30:55.998 "strip_size_kb": 64, 00:30:55.998 "state": "online", 00:30:55.998 "raid_level": "concat", 00:30:55.998 "superblock": false, 00:30:55.998 "num_base_bdevs": 3, 00:30:55.998 "num_base_bdevs_discovered": 3, 00:30:55.998 "num_base_bdevs_operational": 3, 00:30:55.998 "base_bdevs_list": [ 00:30:55.998 { 00:30:55.998 "name": "BaseBdev1", 00:30:55.998 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:55.998 "is_configured": true, 00:30:55.998 "data_offset": 0, 00:30:55.998 "data_size": 65536 00:30:55.998 }, 00:30:55.998 { 00:30:55.998 "name": "BaseBdev2", 00:30:55.998 "uuid": "cc59ca12-0af9-4a8f-867b-c66cd06294d7", 00:30:55.998 "is_configured": true, 00:30:55.998 "data_offset": 0, 00:30:55.998 "data_size": 65536 00:30:55.998 }, 00:30:55.998 { 00:30:55.998 "name": "BaseBdev3", 00:30:55.998 "uuid": "6c0dbc9e-b1f3-4e8e-b2e8-537fc5179927", 00:30:55.998 "is_configured": true, 00:30:55.998 "data_offset": 0, 00:30:55.998 "data_size": 65536 00:30:55.998 } 00:30:55.998 ] 00:30:55.998 }' 00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:30:55.998 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:56.566 [2024-10-09 14:00:02.925084] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.566 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:56.566 "name": "Existed_Raid", 00:30:56.566 "aliases": [ 00:30:56.566 "a56cf9f7-683e-4778-82a0-038101e43504" 00:30:56.566 ], 00:30:56.566 "product_name": "Raid Volume", 00:30:56.566 "block_size": 512, 00:30:56.566 "num_blocks": 196608, 00:30:56.566 "uuid": "a56cf9f7-683e-4778-82a0-038101e43504", 00:30:56.566 "assigned_rate_limits": { 00:30:56.566 "rw_ios_per_sec": 0, 00:30:56.566 "rw_mbytes_per_sec": 0, 00:30:56.566 "r_mbytes_per_sec": 
0, 00:30:56.566 "w_mbytes_per_sec": 0 00:30:56.566 }, 00:30:56.566 "claimed": false, 00:30:56.567 "zoned": false, 00:30:56.567 "supported_io_types": { 00:30:56.567 "read": true, 00:30:56.567 "write": true, 00:30:56.567 "unmap": true, 00:30:56.567 "flush": true, 00:30:56.567 "reset": true, 00:30:56.567 "nvme_admin": false, 00:30:56.567 "nvme_io": false, 00:30:56.567 "nvme_io_md": false, 00:30:56.567 "write_zeroes": true, 00:30:56.567 "zcopy": false, 00:30:56.567 "get_zone_info": false, 00:30:56.567 "zone_management": false, 00:30:56.567 "zone_append": false, 00:30:56.567 "compare": false, 00:30:56.567 "compare_and_write": false, 00:30:56.567 "abort": false, 00:30:56.567 "seek_hole": false, 00:30:56.567 "seek_data": false, 00:30:56.567 "copy": false, 00:30:56.567 "nvme_iov_md": false 00:30:56.567 }, 00:30:56.567 "memory_domains": [ 00:30:56.567 { 00:30:56.567 "dma_device_id": "system", 00:30:56.567 "dma_device_type": 1 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.567 "dma_device_type": 2 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "system", 00:30:56.567 "dma_device_type": 1 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.567 "dma_device_type": 2 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "system", 00:30:56.567 "dma_device_type": 1 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:56.567 "dma_device_type": 2 00:30:56.567 } 00:30:56.567 ], 00:30:56.567 "driver_specific": { 00:30:56.567 "raid": { 00:30:56.567 "uuid": "a56cf9f7-683e-4778-82a0-038101e43504", 00:30:56.567 "strip_size_kb": 64, 00:30:56.567 "state": "online", 00:30:56.567 "raid_level": "concat", 00:30:56.567 "superblock": false, 00:30:56.567 "num_base_bdevs": 3, 00:30:56.567 "num_base_bdevs_discovered": 3, 00:30:56.567 "num_base_bdevs_operational": 3, 00:30:56.567 "base_bdevs_list": [ 00:30:56.567 { 00:30:56.567 "name": "BaseBdev1", 
00:30:56.567 "uuid": "e05336d9-e9db-42a0-9d14-e45b758429b9", 00:30:56.567 "is_configured": true, 00:30:56.567 "data_offset": 0, 00:30:56.567 "data_size": 65536 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "name": "BaseBdev2", 00:30:56.567 "uuid": "cc59ca12-0af9-4a8f-867b-c66cd06294d7", 00:30:56.567 "is_configured": true, 00:30:56.567 "data_offset": 0, 00:30:56.567 "data_size": 65536 00:30:56.567 }, 00:30:56.567 { 00:30:56.567 "name": "BaseBdev3", 00:30:56.567 "uuid": "6c0dbc9e-b1f3-4e8e-b2e8-537fc5179927", 00:30:56.567 "is_configured": true, 00:30:56.567 "data_offset": 0, 00:30:56.567 "data_size": 65536 00:30:56.567 } 00:30:56.567 ] 00:30:56.567 } 00:30:56.567 } 00:30:56.567 }' 00:30:56.567 14:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:56.567 BaseBdev2 00:30:56.567 BaseBdev3' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.567 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.825 [2024-10-09 14:00:03.184884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:56.825 [2024-10-09 14:00:03.184913] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:56.825 [2024-10-09 14:00:03.184973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.825 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:56.826 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.826 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:56.826 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.826 "name": "Existed_Raid", 00:30:56.826 "uuid": "a56cf9f7-683e-4778-82a0-038101e43504", 00:30:56.826 "strip_size_kb": 64, 00:30:56.826 "state": "offline", 00:30:56.826 "raid_level": "concat", 00:30:56.826 "superblock": false, 00:30:56.826 "num_base_bdevs": 3, 00:30:56.826 "num_base_bdevs_discovered": 2, 00:30:56.826 "num_base_bdevs_operational": 2, 00:30:56.826 "base_bdevs_list": [ 00:30:56.826 { 00:30:56.826 "name": null, 00:30:56.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.826 "is_configured": false, 00:30:56.826 "data_offset": 0, 00:30:56.826 "data_size": 65536 00:30:56.826 }, 00:30:56.826 { 00:30:56.826 "name": "BaseBdev2", 00:30:56.826 "uuid": 
"cc59ca12-0af9-4a8f-867b-c66cd06294d7", 00:30:56.826 "is_configured": true, 00:30:56.826 "data_offset": 0, 00:30:56.826 "data_size": 65536 00:30:56.826 }, 00:30:56.826 { 00:30:56.826 "name": "BaseBdev3", 00:30:56.826 "uuid": "6c0dbc9e-b1f3-4e8e-b2e8-537fc5179927", 00:30:56.826 "is_configured": true, 00:30:56.826 "data_offset": 0, 00:30:56.826 "data_size": 65536 00:30:56.826 } 00:30:56.826 ] 00:30:56.826 }' 00:30:56.826 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.826 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 [2024-10-09 14:00:03.701396] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 [2024-10-09 14:00:03.765493] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:57.394 [2024-10-09 14:00:03.765544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:57.394 14:00:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.394 BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:57.394 
14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:57.394 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 [ 00:30:57.395 { 00:30:57.395 "name": "BaseBdev2", 00:30:57.395 "aliases": [ 00:30:57.395 "d63b3695-c089-41a9-857e-771b9020aa04" 00:30:57.395 ], 00:30:57.395 "product_name": "Malloc disk", 00:30:57.395 "block_size": 512, 00:30:57.395 "num_blocks": 65536, 00:30:57.395 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:57.395 "assigned_rate_limits": { 00:30:57.395 "rw_ios_per_sec": 0, 00:30:57.395 "rw_mbytes_per_sec": 0, 00:30:57.395 "r_mbytes_per_sec": 0, 00:30:57.395 "w_mbytes_per_sec": 0 00:30:57.395 }, 00:30:57.395 "claimed": false, 00:30:57.395 "zoned": false, 00:30:57.395 "supported_io_types": { 00:30:57.395 "read": true, 00:30:57.395 "write": true, 00:30:57.395 "unmap": true, 00:30:57.395 "flush": true, 00:30:57.395 "reset": true, 00:30:57.395 "nvme_admin": false, 00:30:57.395 "nvme_io": false, 00:30:57.395 "nvme_io_md": false, 00:30:57.395 "write_zeroes": true, 
00:30:57.395 "zcopy": true, 00:30:57.395 "get_zone_info": false, 00:30:57.395 "zone_management": false, 00:30:57.395 "zone_append": false, 00:30:57.395 "compare": false, 00:30:57.395 "compare_and_write": false, 00:30:57.395 "abort": true, 00:30:57.395 "seek_hole": false, 00:30:57.395 "seek_data": false, 00:30:57.395 "copy": true, 00:30:57.395 "nvme_iov_md": false 00:30:57.395 }, 00:30:57.395 "memory_domains": [ 00:30:57.395 { 00:30:57.395 "dma_device_id": "system", 00:30:57.395 "dma_device_type": 1 00:30:57.395 }, 00:30:57.395 { 00:30:57.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.395 "dma_device_type": 2 00:30:57.395 } 00:30:57.395 ], 00:30:57.395 "driver_specific": {} 00:30:57.395 } 00:30:57.395 ] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 BaseBdev3 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:57.395 14:00:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 [ 00:30:57.395 { 00:30:57.395 "name": "BaseBdev3", 00:30:57.395 "aliases": [ 00:30:57.395 "60983001-630d-4afa-aa06-3476db2b0644" 00:30:57.395 ], 00:30:57.395 "product_name": "Malloc disk", 00:30:57.395 "block_size": 512, 00:30:57.395 "num_blocks": 65536, 00:30:57.395 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:57.395 "assigned_rate_limits": { 00:30:57.395 "rw_ios_per_sec": 0, 00:30:57.395 "rw_mbytes_per_sec": 0, 00:30:57.395 "r_mbytes_per_sec": 0, 00:30:57.395 "w_mbytes_per_sec": 0 00:30:57.395 }, 00:30:57.395 "claimed": false, 00:30:57.395 "zoned": false, 00:30:57.395 "supported_io_types": { 00:30:57.395 "read": true, 00:30:57.395 "write": true, 00:30:57.395 "unmap": true, 00:30:57.395 "flush": true, 00:30:57.395 "reset": true, 00:30:57.395 "nvme_admin": false, 00:30:57.395 "nvme_io": false, 00:30:57.395 "nvme_io_md": false, 00:30:57.395 "write_zeroes": true, 
00:30:57.395 "zcopy": true, 00:30:57.395 "get_zone_info": false, 00:30:57.395 "zone_management": false, 00:30:57.395 "zone_append": false, 00:30:57.395 "compare": false, 00:30:57.395 "compare_and_write": false, 00:30:57.395 "abort": true, 00:30:57.395 "seek_hole": false, 00:30:57.395 "seek_data": false, 00:30:57.395 "copy": true, 00:30:57.395 "nvme_iov_md": false 00:30:57.395 }, 00:30:57.395 "memory_domains": [ 00:30:57.395 { 00:30:57.395 "dma_device_id": "system", 00:30:57.395 "dma_device_type": 1 00:30:57.395 }, 00:30:57.395 { 00:30:57.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.395 "dma_device_type": 2 00:30:57.395 } 00:30:57.395 ], 00:30:57.395 "driver_specific": {} 00:30:57.395 } 00:30:57.395 ] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.395 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 [2024-10-09 14:00:03.940352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:57.395 [2024-10-09 14:00:03.940537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:57.395 [2024-10-09 14:00:03.940721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:57.395 [2024-10-09 14:00:03.943384] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.654 "name": "Existed_Raid", 00:30:57.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.654 "strip_size_kb": 64, 00:30:57.654 "state": "configuring", 00:30:57.654 "raid_level": "concat", 00:30:57.654 "superblock": false, 00:30:57.654 "num_base_bdevs": 3, 00:30:57.654 "num_base_bdevs_discovered": 2, 00:30:57.654 "num_base_bdevs_operational": 3, 00:30:57.654 "base_bdevs_list": [ 00:30:57.654 { 00:30:57.654 "name": "BaseBdev1", 00:30:57.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.654 "is_configured": false, 00:30:57.654 "data_offset": 0, 00:30:57.654 "data_size": 0 00:30:57.654 }, 00:30:57.654 { 00:30:57.654 "name": "BaseBdev2", 00:30:57.654 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:57.654 "is_configured": true, 00:30:57.654 "data_offset": 0, 00:30:57.654 "data_size": 65536 00:30:57.654 }, 00:30:57.654 { 00:30:57.654 "name": "BaseBdev3", 00:30:57.654 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:57.654 "is_configured": true, 00:30:57.654 "data_offset": 0, 00:30:57.654 "data_size": 65536 00:30:57.654 } 00:30:57.654 ] 00:30:57.654 }' 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.654 14:00:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.913 [2024-10-09 14:00:04.396474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.913 "name": "Existed_Raid", 00:30:57.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.913 "strip_size_kb": 64, 00:30:57.913 "state": "configuring", 00:30:57.913 "raid_level": "concat", 00:30:57.913 "superblock": false, 
00:30:57.913 "num_base_bdevs": 3, 00:30:57.913 "num_base_bdevs_discovered": 1, 00:30:57.913 "num_base_bdevs_operational": 3, 00:30:57.913 "base_bdevs_list": [ 00:30:57.913 { 00:30:57.913 "name": "BaseBdev1", 00:30:57.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.913 "is_configured": false, 00:30:57.913 "data_offset": 0, 00:30:57.913 "data_size": 0 00:30:57.913 }, 00:30:57.913 { 00:30:57.913 "name": null, 00:30:57.913 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:57.913 "is_configured": false, 00:30:57.913 "data_offset": 0, 00:30:57.913 "data_size": 65536 00:30:57.913 }, 00:30:57.913 { 00:30:57.913 "name": "BaseBdev3", 00:30:57.913 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:57.913 "is_configured": true, 00:30:57.913 "data_offset": 0, 00:30:57.913 "data_size": 65536 00:30:57.913 } 00:30:57.913 ] 00:30:57.913 }' 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.913 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.482 
14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 [2024-10-09 14:00:04.963850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:58.482 BaseBdev1 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 [ 00:30:58.482 { 00:30:58.482 "name": "BaseBdev1", 00:30:58.482 "aliases": [ 00:30:58.482 "18cc0c93-93f7-4f58-a889-13d2ea030fa8" 00:30:58.482 ], 00:30:58.482 "product_name": 
"Malloc disk", 00:30:58.482 "block_size": 512, 00:30:58.482 "num_blocks": 65536, 00:30:58.482 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:30:58.482 "assigned_rate_limits": { 00:30:58.482 "rw_ios_per_sec": 0, 00:30:58.482 "rw_mbytes_per_sec": 0, 00:30:58.482 "r_mbytes_per_sec": 0, 00:30:58.482 "w_mbytes_per_sec": 0 00:30:58.482 }, 00:30:58.482 "claimed": true, 00:30:58.482 "claim_type": "exclusive_write", 00:30:58.482 "zoned": false, 00:30:58.482 "supported_io_types": { 00:30:58.482 "read": true, 00:30:58.482 "write": true, 00:30:58.482 "unmap": true, 00:30:58.482 "flush": true, 00:30:58.482 "reset": true, 00:30:58.482 "nvme_admin": false, 00:30:58.482 "nvme_io": false, 00:30:58.482 "nvme_io_md": false, 00:30:58.482 "write_zeroes": true, 00:30:58.482 "zcopy": true, 00:30:58.482 "get_zone_info": false, 00:30:58.482 "zone_management": false, 00:30:58.482 "zone_append": false, 00:30:58.482 "compare": false, 00:30:58.482 "compare_and_write": false, 00:30:58.482 "abort": true, 00:30:58.482 "seek_hole": false, 00:30:58.482 "seek_data": false, 00:30:58.482 "copy": true, 00:30:58.482 "nvme_iov_md": false 00:30:58.482 }, 00:30:58.482 "memory_domains": [ 00:30:58.482 { 00:30:58.482 "dma_device_id": "system", 00:30:58.482 "dma_device_type": 1 00:30:58.482 }, 00:30:58.482 { 00:30:58.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.482 "dma_device_type": 2 00:30:58.482 } 00:30:58.482 ], 00:30:58.482 "driver_specific": {} 00:30:58.482 } 00:30:58.482 ] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:30:58.482 14:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:58.482 14:00:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.482 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.741 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.741 "name": "Existed_Raid", 00:30:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.741 "strip_size_kb": 64, 00:30:58.741 "state": "configuring", 00:30:58.741 "raid_level": "concat", 00:30:58.741 "superblock": false, 00:30:58.741 "num_base_bdevs": 3, 00:30:58.741 "num_base_bdevs_discovered": 2, 00:30:58.741 "num_base_bdevs_operational": 3, 00:30:58.741 "base_bdevs_list": [ 00:30:58.741 { 00:30:58.741 "name": "BaseBdev1", 
00:30:58.741 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:30:58.741 "is_configured": true, 00:30:58.741 "data_offset": 0, 00:30:58.741 "data_size": 65536 00:30:58.741 }, 00:30:58.741 { 00:30:58.741 "name": null, 00:30:58.741 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:58.741 "is_configured": false, 00:30:58.741 "data_offset": 0, 00:30:58.741 "data_size": 65536 00:30:58.741 }, 00:30:58.741 { 00:30:58.741 "name": "BaseBdev3", 00:30:58.741 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:58.741 "is_configured": true, 00:30:58.741 "data_offset": 0, 00:30:58.741 "data_size": 65536 00:30:58.741 } 00:30:58.741 ] 00:30:58.741 }' 00:30:58.741 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.741 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.000 [2024-10-09 14:00:05.488060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:59.000 
14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.000 "name": "Existed_Raid", 00:30:59.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:59.000 "strip_size_kb": 64, 00:30:59.000 "state": "configuring", 00:30:59.000 "raid_level": "concat", 00:30:59.000 "superblock": false, 00:30:59.000 "num_base_bdevs": 3, 00:30:59.000 "num_base_bdevs_discovered": 1, 00:30:59.000 "num_base_bdevs_operational": 3, 00:30:59.000 "base_bdevs_list": [ 00:30:59.000 { 00:30:59.000 "name": "BaseBdev1", 00:30:59.000 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:30:59.000 "is_configured": true, 00:30:59.000 "data_offset": 0, 00:30:59.000 "data_size": 65536 00:30:59.000 }, 00:30:59.000 { 00:30:59.000 "name": null, 00:30:59.000 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:59.000 "is_configured": false, 00:30:59.000 "data_offset": 0, 00:30:59.000 "data_size": 65536 00:30:59.000 }, 00:30:59.000 { 00:30:59.000 "name": null, 00:30:59.000 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:59.000 "is_configured": false, 00:30:59.000 "data_offset": 0, 00:30:59.000 "data_size": 65536 00:30:59.000 } 00:30:59.000 ] 00:30:59.000 }' 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.000 14:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.567 [2024-10-09 14:00:06.100275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.567 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.825 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.825 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.825 "name": "Existed_Raid", 00:30:59.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.825 "strip_size_kb": 64, 00:30:59.825 "state": "configuring", 00:30:59.825 "raid_level": "concat", 00:30:59.825 "superblock": false, 00:30:59.825 "num_base_bdevs": 3, 00:30:59.825 "num_base_bdevs_discovered": 2, 00:30:59.825 "num_base_bdevs_operational": 3, 00:30:59.825 "base_bdevs_list": [ 00:30:59.825 { 00:30:59.825 "name": "BaseBdev1", 00:30:59.825 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:30:59.825 "is_configured": true, 00:30:59.825 "data_offset": 0, 00:30:59.825 "data_size": 65536 00:30:59.825 }, 00:30:59.825 { 00:30:59.825 "name": null, 00:30:59.825 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:30:59.825 "is_configured": false, 00:30:59.825 "data_offset": 0, 00:30:59.825 "data_size": 65536 00:30:59.825 }, 00:30:59.825 { 00:30:59.825 "name": "BaseBdev3", 00:30:59.825 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:30:59.825 "is_configured": true, 00:30:59.825 "data_offset": 0, 00:30:59.825 "data_size": 65536 00:30:59.825 } 00:30:59.825 ] 00:30:59.825 }' 00:30:59.825 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.825 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.084 [2024-10-09 14:00:06.604444] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:00.084 
14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.084 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.343 14:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.343 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:00.343 "name": "Existed_Raid", 00:31:00.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.343 "strip_size_kb": 64, 00:31:00.343 "state": "configuring", 00:31:00.343 "raid_level": "concat", 00:31:00.343 "superblock": false, 00:31:00.343 "num_base_bdevs": 3, 00:31:00.343 "num_base_bdevs_discovered": 1, 00:31:00.343 "num_base_bdevs_operational": 3, 00:31:00.343 "base_bdevs_list": [ 00:31:00.343 { 00:31:00.343 "name": null, 00:31:00.343 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:31:00.343 "is_configured": false, 00:31:00.343 "data_offset": 0, 00:31:00.343 "data_size": 65536 00:31:00.343 }, 00:31:00.343 { 00:31:00.343 "name": null, 00:31:00.343 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:31:00.343 "is_configured": false, 00:31:00.343 "data_offset": 0, 00:31:00.343 "data_size": 65536 00:31:00.343 }, 00:31:00.343 { 00:31:00.343 "name": "BaseBdev3", 00:31:00.343 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:31:00.343 "is_configured": true, 00:31:00.343 "data_offset": 0, 00:31:00.343 "data_size": 65536 00:31:00.343 } 00:31:00.343 ] 00:31:00.343 }' 00:31:00.343 14:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:00.343 14:00:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.601 [2024-10-09 14:00:07.111626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:00.601 14:00:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:00.601 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:00.871 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:00.871 "name": "Existed_Raid", 00:31:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.871 "strip_size_kb": 64, 00:31:00.871 "state": "configuring", 00:31:00.871 "raid_level": "concat", 00:31:00.871 "superblock": false, 00:31:00.871 "num_base_bdevs": 3, 00:31:00.871 "num_base_bdevs_discovered": 2, 00:31:00.871 "num_base_bdevs_operational": 3, 00:31:00.871 "base_bdevs_list": [ 00:31:00.871 { 00:31:00.871 "name": null, 00:31:00.871 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:31:00.871 "is_configured": false, 00:31:00.871 "data_offset": 0, 00:31:00.871 "data_size": 65536 00:31:00.871 }, 00:31:00.871 { 00:31:00.871 "name": "BaseBdev2", 00:31:00.871 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:31:00.871 "is_configured": true, 00:31:00.871 "data_offset": 
0, 00:31:00.871 "data_size": 65536 00:31:00.871 }, 00:31:00.871 { 00:31:00.871 "name": "BaseBdev3", 00:31:00.872 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:31:00.872 "is_configured": true, 00:31:00.872 "data_offset": 0, 00:31:00.872 "data_size": 65536 00:31:00.872 } 00:31:00.872 ] 00:31:00.872 }' 00:31:00.872 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:00.872 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.189 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18cc0c93-93f7-4f58-a889-13d2ea030fa8 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.190 [2024-10-09 14:00:07.687157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:01.190 [2024-10-09 14:00:07.687204] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:01.190 [2024-10-09 14:00:07.687217] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:31:01.190 [2024-10-09 14:00:07.687516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:01.190 [2024-10-09 14:00:07.687676] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:01.190 [2024-10-09 14:00:07.687689] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:31:01.190 [2024-10-09 14:00:07.687917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.190 NewBaseBdev 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:01.190 
14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.190 [ 00:31:01.190 { 00:31:01.190 "name": "NewBaseBdev", 00:31:01.190 "aliases": [ 00:31:01.190 "18cc0c93-93f7-4f58-a889-13d2ea030fa8" 00:31:01.190 ], 00:31:01.190 "product_name": "Malloc disk", 00:31:01.190 "block_size": 512, 00:31:01.190 "num_blocks": 65536, 00:31:01.190 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:31:01.190 "assigned_rate_limits": { 00:31:01.190 "rw_ios_per_sec": 0, 00:31:01.190 "rw_mbytes_per_sec": 0, 00:31:01.190 "r_mbytes_per_sec": 0, 00:31:01.190 "w_mbytes_per_sec": 0 00:31:01.190 }, 00:31:01.190 "claimed": true, 00:31:01.190 "claim_type": "exclusive_write", 00:31:01.190 "zoned": false, 00:31:01.190 "supported_io_types": { 00:31:01.190 "read": true, 00:31:01.190 "write": true, 00:31:01.190 "unmap": true, 00:31:01.190 "flush": true, 00:31:01.190 "reset": true, 00:31:01.190 "nvme_admin": false, 00:31:01.190 "nvme_io": false, 00:31:01.190 "nvme_io_md": false, 00:31:01.190 "write_zeroes": true, 00:31:01.190 "zcopy": true, 00:31:01.190 "get_zone_info": false, 00:31:01.190 "zone_management": false, 00:31:01.190 "zone_append": false, 00:31:01.190 "compare": false, 00:31:01.190 "compare_and_write": false, 00:31:01.190 "abort": true, 00:31:01.190 "seek_hole": false, 00:31:01.190 "seek_data": false, 00:31:01.190 "copy": true, 00:31:01.190 "nvme_iov_md": false 00:31:01.190 }, 00:31:01.190 
"memory_domains": [ 00:31:01.190 { 00:31:01.190 "dma_device_id": "system", 00:31:01.190 "dma_device_type": 1 00:31:01.190 }, 00:31:01.190 { 00:31:01.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.190 "dma_device_type": 2 00:31:01.190 } 00:31:01.190 ], 00:31:01.190 "driver_specific": {} 00:31:01.190 } 00:31:01.190 ] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.190 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.449 "name": "Existed_Raid", 00:31:01.449 "uuid": "270a85da-e3d7-45d8-af0c-0b8a48f26e59", 00:31:01.449 "strip_size_kb": 64, 00:31:01.449 "state": "online", 00:31:01.449 "raid_level": "concat", 00:31:01.449 "superblock": false, 00:31:01.449 "num_base_bdevs": 3, 00:31:01.449 "num_base_bdevs_discovered": 3, 00:31:01.449 "num_base_bdevs_operational": 3, 00:31:01.449 "base_bdevs_list": [ 00:31:01.449 { 00:31:01.449 "name": "NewBaseBdev", 00:31:01.449 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:31:01.449 "is_configured": true, 00:31:01.449 "data_offset": 0, 00:31:01.449 "data_size": 65536 00:31:01.449 }, 00:31:01.449 { 00:31:01.449 "name": "BaseBdev2", 00:31:01.449 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:31:01.449 "is_configured": true, 00:31:01.449 "data_offset": 0, 00:31:01.449 "data_size": 65536 00:31:01.449 }, 00:31:01.449 { 00:31:01.449 "name": "BaseBdev3", 00:31:01.449 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:31:01.449 "is_configured": true, 00:31:01.449 "data_offset": 0, 00:31:01.449 "data_size": 65536 00:31:01.449 } 00:31:01.449 ] 00:31:01.449 }' 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.449 14:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.708 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.708 [2024-10-09 14:00:08.239739] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.967 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.968 "name": "Existed_Raid", 00:31:01.968 "aliases": [ 00:31:01.968 "270a85da-e3d7-45d8-af0c-0b8a48f26e59" 00:31:01.968 ], 00:31:01.968 "product_name": "Raid Volume", 00:31:01.968 "block_size": 512, 00:31:01.968 "num_blocks": 196608, 00:31:01.968 "uuid": "270a85da-e3d7-45d8-af0c-0b8a48f26e59", 00:31:01.968 "assigned_rate_limits": { 00:31:01.968 "rw_ios_per_sec": 0, 00:31:01.968 "rw_mbytes_per_sec": 0, 00:31:01.968 "r_mbytes_per_sec": 0, 00:31:01.968 "w_mbytes_per_sec": 0 00:31:01.968 }, 00:31:01.968 "claimed": false, 00:31:01.968 "zoned": false, 00:31:01.968 "supported_io_types": { 00:31:01.968 "read": true, 00:31:01.968 "write": true, 00:31:01.968 "unmap": true, 00:31:01.968 "flush": true, 00:31:01.968 "reset": true, 00:31:01.968 "nvme_admin": false, 00:31:01.968 "nvme_io": false, 00:31:01.968 "nvme_io_md": false, 00:31:01.968 "write_zeroes": true, 
00:31:01.968 "zcopy": false, 00:31:01.968 "get_zone_info": false, 00:31:01.968 "zone_management": false, 00:31:01.968 "zone_append": false, 00:31:01.968 "compare": false, 00:31:01.968 "compare_and_write": false, 00:31:01.968 "abort": false, 00:31:01.968 "seek_hole": false, 00:31:01.968 "seek_data": false, 00:31:01.968 "copy": false, 00:31:01.968 "nvme_iov_md": false 00:31:01.968 }, 00:31:01.968 "memory_domains": [ 00:31:01.968 { 00:31:01.968 "dma_device_id": "system", 00:31:01.968 "dma_device_type": 1 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.968 "dma_device_type": 2 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "dma_device_id": "system", 00:31:01.968 "dma_device_type": 1 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.968 "dma_device_type": 2 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "dma_device_id": "system", 00:31:01.968 "dma_device_type": 1 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.968 "dma_device_type": 2 00:31:01.968 } 00:31:01.968 ], 00:31:01.968 "driver_specific": { 00:31:01.968 "raid": { 00:31:01.968 "uuid": "270a85da-e3d7-45d8-af0c-0b8a48f26e59", 00:31:01.968 "strip_size_kb": 64, 00:31:01.968 "state": "online", 00:31:01.968 "raid_level": "concat", 00:31:01.968 "superblock": false, 00:31:01.968 "num_base_bdevs": 3, 00:31:01.968 "num_base_bdevs_discovered": 3, 00:31:01.968 "num_base_bdevs_operational": 3, 00:31:01.968 "base_bdevs_list": [ 00:31:01.968 { 00:31:01.968 "name": "NewBaseBdev", 00:31:01.968 "uuid": "18cc0c93-93f7-4f58-a889-13d2ea030fa8", 00:31:01.968 "is_configured": true, 00:31:01.968 "data_offset": 0, 00:31:01.968 "data_size": 65536 00:31:01.968 }, 00:31:01.968 { 00:31:01.968 "name": "BaseBdev2", 00:31:01.968 "uuid": "d63b3695-c089-41a9-857e-771b9020aa04", 00:31:01.968 "is_configured": true, 00:31:01.968 "data_offset": 0, 00:31:01.968 "data_size": 65536 00:31:01.968 }, 00:31:01.968 { 
00:31:01.968 "name": "BaseBdev3", 00:31:01.968 "uuid": "60983001-630d-4afa-aa06-3476db2b0644", 00:31:01.968 "is_configured": true, 00:31:01.968 "data_offset": 0, 00:31:01.968 "data_size": 65536 00:31:01.968 } 00:31:01.968 ] 00:31:01.968 } 00:31:01.968 } 00:31:01.968 }' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:01.968 BaseBdev2 00:31:01.968 BaseBdev3' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.968 14:00:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:31:01.968 [2024-10-09 14:00:08.511453] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:01.968 [2024-10-09 14:00:08.511488] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:01.968 [2024-10-09 14:00:08.511574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:01.968 [2024-10-09 14:00:08.511654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:01.968 [2024-10-09 14:00:08.511670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77064 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77064 ']' 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 77064 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77064 00:31:02.226 killing process with pid 77064 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77064' 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 77064 00:31:02.226 [2024-10-09 14:00:08.553756] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:02.226 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 77064 00:31:02.226 [2024-10-09 14:00:08.585982] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:02.484 00:31:02.484 real 0m9.556s 00:31:02.484 user 0m16.478s 00:31:02.484 sys 0m1.959s 00:31:02.484 ************************************ 00:31:02.484 END TEST raid_state_function_test 00:31:02.484 ************************************ 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.484 14:00:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:31:02.484 14:00:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:02.484 14:00:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.484 14:00:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:02.484 ************************************ 00:31:02.484 START TEST raid_state_function_test_sb 00:31:02.484 ************************************ 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77674 00:31:02.484 Process raid pid: 77674 00:31:02.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77674' 00:31:02.484 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77674 00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77674 ']' 00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:02.485 14:00:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:02.485 [2024-10-09 14:00:08.989381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:31:02.485 [2024-10-09 14:00:08.989519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:02.743 [2024-10-09 14:00:09.149507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:02.743 [2024-10-09 14:00:09.196740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:31:02.743 [2024-10-09 14:00:09.240277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:31:02.743 [2024-10-09 14:00:09.240540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:03.679 [2024-10-09 14:00:09.975457] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:31:03.679 [2024-10-09 14:00:09.975514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:31:03.679 [2024-10-09 14:00:09.975531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:31:03.679 [2024-10-09 14:00:09.975544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:31:03.679 [2024-10-09 14:00:09.975564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:31:03.679 [2024-10-09 14:00:09.975580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:03.679 14:00:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:03.679 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:03.679 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:03.679 "name": "Existed_Raid",
00:31:03.679 "uuid": "a7b43d74-aa32-4d1d-b83c-b0202299875b",
00:31:03.679 "strip_size_kb": 64,
00:31:03.679 "state": "configuring",
00:31:03.679 "raid_level": "concat",
00:31:03.679 "superblock": true,
00:31:03.679 "num_base_bdevs": 3,
00:31:03.679 "num_base_bdevs_discovered": 0,
00:31:03.679 "num_base_bdevs_operational": 3,
00:31:03.679 "base_bdevs_list": [
00:31:03.679 {
00:31:03.679 "name": "BaseBdev1",
00:31:03.679 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:03.679 "is_configured": false,
00:31:03.679 "data_offset": 0,
00:31:03.679 "data_size": 0
00:31:03.679 },
00:31:03.679 {
00:31:03.679 "name": "BaseBdev2",
00:31:03.679 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:03.679 "is_configured": false,
00:31:03.679 "data_offset": 0,
00:31:03.679 "data_size": 0
00:31:03.679 },
00:31:03.679 {
00:31:03.679 "name": "BaseBdev3",
00:31:03.679 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:03.679 "is_configured": false,
00:31:03.679 "data_offset": 0,
00:31:03.679 "data_size": 0
00:31:03.679 }
00:31:03.679 ]
00:31:03.679 }'
00:31:03.679 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:03.679 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.246 [2024-10-09 14:00:10.495498] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:31:04.246 [2024-10-09 14:00:10.495547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.246 [2024-10-09 14:00:10.507558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:31:04.246 [2024-10-09 14:00:10.507750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:31:04.246 [2024-10-09 14:00:10.507773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:31:04.246 [2024-10-09 14:00:10.507789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:31:04.246 [2024-10-09 14:00:10.507797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:31:04.246 [2024-10-09 14:00:10.507811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.246 [2024-10-09 14:00:10.525397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:31:04.246 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.247 [
00:31:04.247 {
00:31:04.247 "name": "BaseBdev1",
00:31:04.247 "aliases": [
00:31:04.247 "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630"
00:31:04.247 ],
00:31:04.247 "product_name": "Malloc disk",
00:31:04.247 "block_size": 512,
00:31:04.247 "num_blocks": 65536,
00:31:04.247 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:04.247 "assigned_rate_limits": {
00:31:04.247 "rw_ios_per_sec": 0,
00:31:04.247 "rw_mbytes_per_sec": 0,
00:31:04.247 "r_mbytes_per_sec": 0,
00:31:04.247 "w_mbytes_per_sec": 0
00:31:04.247 },
00:31:04.247 "claimed": true,
00:31:04.247 "claim_type": "exclusive_write",
00:31:04.247 "zoned": false,
00:31:04.247 "supported_io_types": {
00:31:04.247 "read": true,
00:31:04.247 "write": true,
00:31:04.247 "unmap": true,
00:31:04.247 "flush": true,
00:31:04.247 "reset": true,
00:31:04.247 "nvme_admin": false,
00:31:04.247 "nvme_io": false,
00:31:04.247 "nvme_io_md": false,
00:31:04.247 "write_zeroes": true,
00:31:04.247 "zcopy": true,
00:31:04.247 "get_zone_info": false,
00:31:04.247 "zone_management": false,
00:31:04.247 "zone_append": false,
00:31:04.247 "compare": false,
00:31:04.247 "compare_and_write": false,
00:31:04.247 "abort": true,
00:31:04.247 "seek_hole": false,
00:31:04.247 "seek_data": false,
00:31:04.247 "copy": true,
00:31:04.247 "nvme_iov_md": false
00:31:04.247 },
00:31:04.247 "memory_domains": [
00:31:04.247 {
00:31:04.247 "dma_device_id": "system",
00:31:04.247 "dma_device_type": 1
00:31:04.247 },
00:31:04.247 {
00:31:04.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:04.247 "dma_device_type": 2
00:31:04.247 }
00:31:04.247 ],
00:31:04.247 "driver_specific": {}
00:31:04.247 }
00:31:04.247 ]
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:04.247 "name": "Existed_Raid",
00:31:04.247 "uuid": "fbb5ee2b-2123-43e0-9cd8-8d78fc61e830",
00:31:04.247 "strip_size_kb": 64,
00:31:04.247 "state": "configuring",
00:31:04.247 "raid_level": "concat",
00:31:04.247 "superblock": true,
00:31:04.247 "num_base_bdevs": 3,
00:31:04.247 "num_base_bdevs_discovered": 1,
00:31:04.247 "num_base_bdevs_operational": 3,
00:31:04.247 "base_bdevs_list": [
00:31:04.247 {
00:31:04.247 "name": "BaseBdev1",
00:31:04.247 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:04.247 "is_configured": true,
00:31:04.247 "data_offset": 2048,
00:31:04.247 "data_size": 63488
00:31:04.247 },
00:31:04.247 {
00:31:04.247 "name": "BaseBdev2",
00:31:04.247 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:04.247 "is_configured": false,
00:31:04.247 "data_offset": 0,
00:31:04.247 "data_size": 0
00:31:04.247 },
00:31:04.247 {
00:31:04.247 "name": "BaseBdev3",
00:31:04.247 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:04.247 "is_configured": false,
00:31:04.247 "data_offset": 0,
00:31:04.247 "data_size": 0
00:31:04.247 }
00:31:04.247 ]
00:31:04.247 }'
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:04.247 14:00:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.506 [2024-10-09 14:00:11.037568] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:31:04.506 [2024-10-09 14:00:11.037635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.506 [2024-10-09 14:00:11.045622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:31:04.506 [2024-10-09 14:00:11.048183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:31:04.506 [2024-10-09 14:00:11.048236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:31:04.506 [2024-10-09 14:00:11.048251] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:31:04.506 [2024-10-09 14:00:11.048270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:04.506 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:04.765 "name": "Existed_Raid",
00:31:04.765 "uuid": "accf8158-2cda-4dad-8116-914d59675499",
00:31:04.765 "strip_size_kb": 64,
00:31:04.765 "state": "configuring",
00:31:04.765 "raid_level": "concat",
00:31:04.765 "superblock": true,
00:31:04.765 "num_base_bdevs": 3,
00:31:04.765 "num_base_bdevs_discovered": 1,
00:31:04.765 "num_base_bdevs_operational": 3,
00:31:04.765 "base_bdevs_list": [
00:31:04.765 {
00:31:04.765 "name": "BaseBdev1",
00:31:04.765 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:04.765 "is_configured": true,
00:31:04.765 "data_offset": 2048,
00:31:04.765 "data_size": 63488
00:31:04.765 },
00:31:04.765 {
00:31:04.765 "name": "BaseBdev2",
00:31:04.765 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:04.765 "is_configured": false,
00:31:04.765 "data_offset": 0,
00:31:04.765 "data_size": 0
00:31:04.765 },
00:31:04.765 {
00:31:04.765 "name": "BaseBdev3",
00:31:04.765 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:04.765 "is_configured": false,
00:31:04.765 "data_offset": 0,
00:31:04.765 "data_size": 0
00:31:04.765 }
00:31:04.765 ]
00:31:04.765 }'
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:04.765 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.023 [2024-10-09 14:00:11.513643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.023 [
00:31:05.023 {
00:31:05.023 "name": "BaseBdev2",
00:31:05.023 "aliases": [
00:31:05.023 "ec593ae9-c7de-47f4-8514-f97c65b14306"
00:31:05.023 ],
00:31:05.023 "product_name": "Malloc disk",
00:31:05.023 "block_size": 512,
00:31:05.023 "num_blocks": 65536,
00:31:05.023 "uuid": "ec593ae9-c7de-47f4-8514-f97c65b14306",
00:31:05.023 "assigned_rate_limits": {
00:31:05.023 "rw_ios_per_sec": 0,
00:31:05.023 "rw_mbytes_per_sec": 0,
00:31:05.023 "r_mbytes_per_sec": 0,
00:31:05.023 "w_mbytes_per_sec": 0
00:31:05.023 },
00:31:05.023 "claimed": true,
00:31:05.023 "claim_type": "exclusive_write",
00:31:05.023 "zoned": false,
00:31:05.023 "supported_io_types": {
00:31:05.023 "read": true,
00:31:05.023 "write": true,
00:31:05.023 "unmap": true,
00:31:05.023 "flush": true,
00:31:05.023 "reset": true,
00:31:05.023 "nvme_admin": false,
00:31:05.023 "nvme_io": false,
00:31:05.023 "nvme_io_md": false,
00:31:05.023 "write_zeroes": true,
00:31:05.023 "zcopy": true,
00:31:05.023 "get_zone_info": false,
00:31:05.023 "zone_management": false,
00:31:05.023 "zone_append": false,
00:31:05.023 "compare": false,
00:31:05.023 "compare_and_write": false,
00:31:05.023 "abort": true,
00:31:05.023 "seek_hole": false,
00:31:05.023 "seek_data": false,
00:31:05.023 "copy": true,
00:31:05.023 "nvme_iov_md": false
00:31:05.023 },
00:31:05.023 "memory_domains": [
00:31:05.023 {
00:31:05.023 "dma_device_id": "system",
00:31:05.023 "dma_device_type": 1
00:31:05.023 },
00:31:05.023 {
00:31:05.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:05.023 "dma_device_type": 2
00:31:05.023 }
00:31:05.023 ],
00:31:05.023 "driver_specific": {}
00:31:05.023 }
00:31:05.023 ]
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.023 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.282 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.282 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:05.282 "name": "Existed_Raid",
00:31:05.282 "uuid": "accf8158-2cda-4dad-8116-914d59675499",
00:31:05.282 "strip_size_kb": 64,
00:31:05.282 "state": "configuring",
00:31:05.282 "raid_level": "concat",
00:31:05.282 "superblock": true,
00:31:05.282 "num_base_bdevs": 3,
00:31:05.282 "num_base_bdevs_discovered": 2,
00:31:05.282 "num_base_bdevs_operational": 3,
00:31:05.282 "base_bdevs_list": [
00:31:05.282 {
00:31:05.282 "name": "BaseBdev1",
00:31:05.282 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:05.282 "is_configured": true,
00:31:05.282 "data_offset": 2048,
00:31:05.282 "data_size": 63488
00:31:05.282 },
00:31:05.282 {
00:31:05.282 "name": "BaseBdev2",
00:31:05.282 "uuid": "ec593ae9-c7de-47f4-8514-f97c65b14306",
00:31:05.282 "is_configured": true,
00:31:05.282 "data_offset": 2048,
00:31:05.282 "data_size": 63488
00:31:05.282 },
00:31:05.282 {
00:31:05.282 "name": "BaseBdev3",
00:31:05.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:05.282 "is_configured": false,
00:31:05.282 "data_offset": 0,
00:31:05.282 "data_size": 0
00:31:05.282 }
00:31:05.282 ]
00:31:05.282 }'
00:31:05.282 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:05.282 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.540 14:00:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:31:05.540 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.540 14:00:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.540 [2024-10-09 14:00:12.005511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:31:05.540 [2024-10-09 14:00:12.005768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:31:05.540 [2024-10-09 14:00:12.005798] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
BaseBdev3
00:31:05.540 [2024-10-09 14:00:12.006191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:31:05.540 [2024-10-09 14:00:12.006328] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:31:05.540 [2024-10-09 14:00:12.006343] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:31:05.540 [2024-10-09 14:00:12.006481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.540 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.541 [
00:31:05.541 {
00:31:05.541 "name": "BaseBdev3",
00:31:05.541 "aliases": [
00:31:05.541 "c39a13b6-7bec-4b9e-9537-826482d4925e"
00:31:05.541 ],
00:31:05.541 "product_name": "Malloc disk",
00:31:05.541 "block_size": 512,
00:31:05.541 "num_blocks": 65536,
00:31:05.541 "uuid": "c39a13b6-7bec-4b9e-9537-826482d4925e",
00:31:05.541 "assigned_rate_limits": {
00:31:05.541 "rw_ios_per_sec": 0,
00:31:05.541 "rw_mbytes_per_sec": 0,
00:31:05.541 "r_mbytes_per_sec": 0,
00:31:05.541 "w_mbytes_per_sec": 0
00:31:05.541 },
00:31:05.541 "claimed": true,
00:31:05.541 "claim_type": "exclusive_write",
00:31:05.541 "zoned": false,
00:31:05.541 "supported_io_types": {
00:31:05.541 "read": true,
00:31:05.541 "write": true,
00:31:05.541 "unmap": true,
00:31:05.541 "flush": true,
00:31:05.541 "reset": true,
00:31:05.541 "nvme_admin": false,
00:31:05.541 "nvme_io": false,
00:31:05.541 "nvme_io_md": false,
00:31:05.541 "write_zeroes": true,
00:31:05.541 "zcopy": true,
00:31:05.541 "get_zone_info": false,
00:31:05.541 "zone_management": false,
00:31:05.541 "zone_append": false,
00:31:05.541 "compare": false,
00:31:05.541 "compare_and_write": false,
00:31:05.541 "abort": true,
00:31:05.541 "seek_hole": false,
00:31:05.541 "seek_data": false,
00:31:05.541 "copy": true,
00:31:05.541 "nvme_iov_md": false
00:31:05.541 },
00:31:05.541 "memory_domains": [
00:31:05.541 {
00:31:05.541 "dma_device_id": "system",
00:31:05.541 "dma_device_type": 1
00:31:05.541 },
00:31:05.541 {
00:31:05.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:05.541 "dma_device_type": 2
00:31:05.541 }
00:31:05.541 ],
00:31:05.541 "driver_specific": {}
00:31:05.541 }
00:31:05.541 ]
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:31:05.541 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:05.800 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:31:05.800 "name": "Existed_Raid",
00:31:05.800 "uuid": "accf8158-2cda-4dad-8116-914d59675499",
00:31:05.800 "strip_size_kb": 64,
00:31:05.800 "state": "online",
00:31:05.800 "raid_level": "concat",
00:31:05.800 "superblock": true,
00:31:05.800 "num_base_bdevs": 3,
00:31:05.800 "num_base_bdevs_discovered": 3,
00:31:05.800 "num_base_bdevs_operational": 3,
00:31:05.800 "base_bdevs_list": [
00:31:05.800 {
00:31:05.800 "name": "BaseBdev1",
00:31:05.800 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:05.800 "is_configured": true,
00:31:05.800 "data_offset": 2048,
00:31:05.800 "data_size": 63488
00:31:05.800 },
00:31:05.800 {
00:31:05.800 "name": "BaseBdev2",
00:31:05.800 "uuid": "ec593ae9-c7de-47f4-8514-f97c65b14306",
00:31:05.800 "is_configured": true,
00:31:05.800 "data_offset": 2048,
00:31:05.800 "data_size": 63488
00:31:05.800 },
00:31:05.800 {
00:31:05.800 "name": "BaseBdev3",
00:31:05.800 "uuid": "c39a13b6-7bec-4b9e-9537-826482d4925e",
00:31:05.800 "is_configured": true,
00:31:05.800 "data_offset": 2048,
00:31:05.800 "data_size": 63488
00:31:05.800 }
00:31:05.800 ]
00:31:05.800 }'
00:31:05.800 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:31:05.800 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:06.059 [2024-10-09 14:00:12.494073] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:06.059 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:06.059 "name": "Existed_Raid",
00:31:06.059 "aliases": [
00:31:06.059 "accf8158-2cda-4dad-8116-914d59675499"
00:31:06.059 ],
00:31:06.059 "product_name": "Raid Volume",
00:31:06.059 "block_size": 512,
00:31:06.059 "num_blocks": 190464,
00:31:06.059 "uuid": "accf8158-2cda-4dad-8116-914d59675499",
00:31:06.059 "assigned_rate_limits": {
00:31:06.059 "rw_ios_per_sec": 0,
00:31:06.059 "rw_mbytes_per_sec": 0,
00:31:06.059 "r_mbytes_per_sec": 0,
00:31:06.059 "w_mbytes_per_sec": 0
00:31:06.059 },
00:31:06.059 "claimed": false,
00:31:06.059 "zoned": false,
00:31:06.059 "supported_io_types": {
00:31:06.059 "read": true,
00:31:06.059 "write": true,
00:31:06.059 "unmap": true,
00:31:06.060 "flush": true,
00:31:06.060 "reset": true,
00:31:06.060 "nvme_admin": false,
00:31:06.060 "nvme_io": false,
00:31:06.060 "nvme_io_md": false,
00:31:06.060 "write_zeroes": true,
00:31:06.060 "zcopy": false,
00:31:06.060 "get_zone_info": false,
00:31:06.060 "zone_management": false,
00:31:06.060 "zone_append": false,
00:31:06.060 "compare": false,
00:31:06.060 "compare_and_write": false,
00:31:06.060 "abort": false,
00:31:06.060 "seek_hole": false,
00:31:06.060 "seek_data": false,
00:31:06.060 "copy": false,
00:31:06.060 "nvme_iov_md": false
00:31:06.060 },
00:31:06.060 "memory_domains": [
00:31:06.060 {
00:31:06.060 "dma_device_id": "system",
00:31:06.060 "dma_device_type": 1
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:06.060 "dma_device_type": 2
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "dma_device_id": "system",
00:31:06.060 "dma_device_type": 1
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:06.060 "dma_device_type": 2
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "dma_device_id": "system",
00:31:06.060 "dma_device_type": 1
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:31:06.060 "dma_device_type": 2
00:31:06.060 }
00:31:06.060 ],
00:31:06.060 "driver_specific": {
00:31:06.060 "raid": {
00:31:06.060 "uuid": "accf8158-2cda-4dad-8116-914d59675499",
00:31:06.060 "strip_size_kb": 64,
00:31:06.060 "state": "online",
00:31:06.060 "raid_level": "concat",
00:31:06.060 "superblock": true,
00:31:06.060 "num_base_bdevs": 3,
00:31:06.060 "num_base_bdevs_discovered": 3,
00:31:06.060 "num_base_bdevs_operational": 3,
00:31:06.060 "base_bdevs_list": [
00:31:06.060 {
00:31:06.060 "name": "BaseBdev1",
00:31:06.060 "uuid": "e1159de7-e2a6-48e6-abb8-4cc0c5b2b630",
00:31:06.060 "is_configured": true,
00:31:06.060 "data_offset": 2048,
00:31:06.060 "data_size": 63488
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "name": "BaseBdev2",
00:31:06.060 "uuid": "ec593ae9-c7de-47f4-8514-f97c65b14306",
00:31:06.060 "is_configured": true,
00:31:06.060 "data_offset": 2048,
00:31:06.060 "data_size": 63488
00:31:06.060 },
00:31:06.060 {
00:31:06.060 "name": "BaseBdev3",
00:31:06.060 "uuid": "c39a13b6-7bec-4b9e-9537-826482d4925e",
00:31:06.060 "is_configured": true,
00:31:06.060 "data_offset": 2048,
00:31:06.060 "data_size": 63488
00:31:06.060 }
00:31:06.060 ]
00:31:06.060 }
00:31:06.060 }
00:31:06.060 }'
00:31:06.060 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:31:06.320 BaseBdev2
00:31:06.320 BaseBdev3'
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:31:06.320 14:00:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.320 [2024-10-09 14:00:12.821924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:06.320 [2024-10-09 14:00:12.821956] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:06.320 [2024-10-09 14:00:12.822039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:06.320 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.579 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:06.579 "name": "Existed_Raid", 00:31:06.579 "uuid": "accf8158-2cda-4dad-8116-914d59675499", 00:31:06.579 "strip_size_kb": 64, 00:31:06.579 "state": "offline", 00:31:06.579 "raid_level": "concat", 00:31:06.579 "superblock": true, 00:31:06.579 "num_base_bdevs": 3, 00:31:06.579 "num_base_bdevs_discovered": 2, 00:31:06.579 "num_base_bdevs_operational": 2, 00:31:06.579 "base_bdevs_list": [ 00:31:06.579 { 00:31:06.579 "name": null, 00:31:06.579 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:06.579 "is_configured": false, 00:31:06.579 "data_offset": 0, 00:31:06.579 "data_size": 63488 00:31:06.579 }, 00:31:06.579 { 00:31:06.579 "name": "BaseBdev2", 00:31:06.579 "uuid": "ec593ae9-c7de-47f4-8514-f97c65b14306", 00:31:06.579 "is_configured": true, 00:31:06.579 "data_offset": 2048, 00:31:06.579 "data_size": 63488 00:31:06.579 }, 00:31:06.579 { 00:31:06.579 "name": "BaseBdev3", 00:31:06.579 "uuid": "c39a13b6-7bec-4b9e-9537-826482d4925e", 00:31:06.579 "is_configured": true, 00:31:06.579 "data_offset": 2048, 00:31:06.579 "data_size": 63488 00:31:06.579 } 00:31:06.579 ] 00:31:06.579 }' 00:31:06.579 14:00:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:06.579 14:00:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.838 [2024-10-09 14:00:13.334971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:06.838 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.097 [2024-10-09 14:00:13.399777] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:07.097 [2024-10-09 14:00:13.399829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:07.097 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 BaseBdev2 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 
14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 [ 00:31:07.098 { 00:31:07.098 "name": "BaseBdev2", 00:31:07.098 "aliases": [ 00:31:07.098 "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b" 00:31:07.098 ], 00:31:07.098 "product_name": "Malloc disk", 00:31:07.098 "block_size": 512, 00:31:07.098 "num_blocks": 65536, 00:31:07.098 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:07.098 "assigned_rate_limits": { 00:31:07.098 "rw_ios_per_sec": 0, 00:31:07.098 "rw_mbytes_per_sec": 0, 00:31:07.098 "r_mbytes_per_sec": 0, 00:31:07.098 "w_mbytes_per_sec": 0 
00:31:07.098 }, 00:31:07.098 "claimed": false, 00:31:07.098 "zoned": false, 00:31:07.098 "supported_io_types": { 00:31:07.098 "read": true, 00:31:07.098 "write": true, 00:31:07.098 "unmap": true, 00:31:07.098 "flush": true, 00:31:07.098 "reset": true, 00:31:07.098 "nvme_admin": false, 00:31:07.098 "nvme_io": false, 00:31:07.098 "nvme_io_md": false, 00:31:07.098 "write_zeroes": true, 00:31:07.098 "zcopy": true, 00:31:07.098 "get_zone_info": false, 00:31:07.098 "zone_management": false, 00:31:07.098 "zone_append": false, 00:31:07.098 "compare": false, 00:31:07.098 "compare_and_write": false, 00:31:07.098 "abort": true, 00:31:07.098 "seek_hole": false, 00:31:07.098 "seek_data": false, 00:31:07.098 "copy": true, 00:31:07.098 "nvme_iov_md": false 00:31:07.098 }, 00:31:07.098 "memory_domains": [ 00:31:07.098 { 00:31:07.098 "dma_device_id": "system", 00:31:07.098 "dma_device_type": 1 00:31:07.098 }, 00:31:07.098 { 00:31:07.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:07.098 "dma_device_type": 2 00:31:07.098 } 00:31:07.098 ], 00:31:07.098 "driver_specific": {} 00:31:07.098 } 00:31:07.098 ] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 BaseBdev3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 [ 00:31:07.098 { 00:31:07.098 "name": "BaseBdev3", 00:31:07.098 "aliases": [ 00:31:07.098 "79623843-cacf-4413-b892-cede447d6846" 00:31:07.098 ], 00:31:07.098 "product_name": "Malloc disk", 00:31:07.098 "block_size": 512, 00:31:07.098 "num_blocks": 65536, 00:31:07.098 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:07.098 "assigned_rate_limits": { 00:31:07.098 "rw_ios_per_sec": 0, 00:31:07.098 "rw_mbytes_per_sec": 0, 
00:31:07.098 "r_mbytes_per_sec": 0, 00:31:07.098 "w_mbytes_per_sec": 0 00:31:07.098 }, 00:31:07.098 "claimed": false, 00:31:07.098 "zoned": false, 00:31:07.098 "supported_io_types": { 00:31:07.098 "read": true, 00:31:07.098 "write": true, 00:31:07.098 "unmap": true, 00:31:07.098 "flush": true, 00:31:07.098 "reset": true, 00:31:07.098 "nvme_admin": false, 00:31:07.098 "nvme_io": false, 00:31:07.098 "nvme_io_md": false, 00:31:07.098 "write_zeroes": true, 00:31:07.098 "zcopy": true, 00:31:07.098 "get_zone_info": false, 00:31:07.098 "zone_management": false, 00:31:07.098 "zone_append": false, 00:31:07.098 "compare": false, 00:31:07.098 "compare_and_write": false, 00:31:07.098 "abort": true, 00:31:07.098 "seek_hole": false, 00:31:07.098 "seek_data": false, 00:31:07.098 "copy": true, 00:31:07.098 "nvme_iov_md": false 00:31:07.098 }, 00:31:07.098 "memory_domains": [ 00:31:07.098 { 00:31:07.098 "dma_device_id": "system", 00:31:07.098 "dma_device_type": 1 00:31:07.098 }, 00:31:07.098 { 00:31:07.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:07.098 "dma_device_type": 2 00:31:07.098 } 00:31:07.098 ], 00:31:07.098 "driver_specific": {} 00:31:07.098 } 00:31:07.098 ] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.098 [2024-10-09 14:00:13.587761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:07.098 [2024-10-09 14:00:13.587935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:07.098 [2024-10-09 14:00:13.587977] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:07.098 [2024-10-09 14:00:13.590428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.098 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.098 "name": "Existed_Raid", 00:31:07.098 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:07.098 "strip_size_kb": 64, 00:31:07.098 "state": "configuring", 00:31:07.098 "raid_level": "concat", 00:31:07.098 "superblock": true, 00:31:07.099 "num_base_bdevs": 3, 00:31:07.099 "num_base_bdevs_discovered": 2, 00:31:07.099 "num_base_bdevs_operational": 3, 00:31:07.099 "base_bdevs_list": [ 00:31:07.099 { 00:31:07.099 "name": "BaseBdev1", 00:31:07.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.099 "is_configured": false, 00:31:07.099 "data_offset": 0, 00:31:07.099 "data_size": 0 00:31:07.099 }, 00:31:07.099 { 00:31:07.099 "name": "BaseBdev2", 00:31:07.099 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:07.099 "is_configured": true, 00:31:07.099 "data_offset": 2048, 00:31:07.099 "data_size": 63488 00:31:07.099 }, 00:31:07.099 { 00:31:07.099 "name": "BaseBdev3", 00:31:07.099 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:07.099 "is_configured": true, 00:31:07.099 "data_offset": 2048, 00:31:07.099 "data_size": 63488 00:31:07.099 } 00:31:07.099 ] 00:31:07.099 }' 00:31:07.099 14:00:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.099 14:00:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.666 [2024-10-09 14:00:14.091903] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.666 "name": "Existed_Raid", 00:31:07.666 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:07.666 "strip_size_kb": 64, 00:31:07.666 "state": "configuring", 00:31:07.666 "raid_level": "concat", 00:31:07.666 "superblock": true, 00:31:07.666 "num_base_bdevs": 3, 00:31:07.666 "num_base_bdevs_discovered": 1, 00:31:07.666 "num_base_bdevs_operational": 3, 00:31:07.666 "base_bdevs_list": [ 00:31:07.666 { 00:31:07.666 "name": "BaseBdev1", 00:31:07.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.666 "is_configured": false, 00:31:07.666 "data_offset": 0, 00:31:07.666 "data_size": 0 00:31:07.666 }, 00:31:07.666 { 00:31:07.666 "name": null, 00:31:07.666 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:07.666 "is_configured": false, 00:31:07.666 "data_offset": 0, 00:31:07.666 "data_size": 63488 00:31:07.666 }, 00:31:07.666 { 00:31:07.666 "name": "BaseBdev3", 00:31:07.666 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:07.666 "is_configured": true, 00:31:07.666 "data_offset": 2048, 00:31:07.666 "data_size": 63488 00:31:07.666 } 00:31:07.666 ] 00:31:07.666 }' 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.666 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.234 [2024-10-09 14:00:14.623254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:08.234 BaseBdev1 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.234 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.234 [ 00:31:08.234 { 00:31:08.234 "name": "BaseBdev1", 00:31:08.234 "aliases": [ 00:31:08.234 "467b1be1-151b-4fde-9677-88327c302e17" 00:31:08.234 ], 00:31:08.234 "product_name": "Malloc disk", 00:31:08.234 "block_size": 512, 00:31:08.234 "num_blocks": 65536, 00:31:08.234 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:08.234 "assigned_rate_limits": { 00:31:08.234 "rw_ios_per_sec": 0, 00:31:08.234 "rw_mbytes_per_sec": 0, 00:31:08.234 "r_mbytes_per_sec": 0, 00:31:08.234 "w_mbytes_per_sec": 0 00:31:08.234 }, 00:31:08.234 "claimed": true, 00:31:08.235 "claim_type": "exclusive_write", 00:31:08.235 "zoned": false, 00:31:08.235 "supported_io_types": { 00:31:08.235 "read": true, 00:31:08.235 "write": true, 00:31:08.235 "unmap": true, 00:31:08.235 "flush": true, 00:31:08.235 "reset": true, 00:31:08.235 "nvme_admin": false, 00:31:08.235 "nvme_io": false, 00:31:08.235 "nvme_io_md": false, 00:31:08.235 "write_zeroes": true, 00:31:08.235 "zcopy": true, 00:31:08.235 "get_zone_info": false, 00:31:08.235 "zone_management": false, 00:31:08.235 "zone_append": false, 00:31:08.235 "compare": false, 00:31:08.235 "compare_and_write": false, 00:31:08.235 "abort": true, 00:31:08.235 "seek_hole": false, 00:31:08.235 "seek_data": false, 00:31:08.235 "copy": true, 00:31:08.235 "nvme_iov_md": false 00:31:08.235 }, 00:31:08.235 "memory_domains": [ 00:31:08.235 { 00:31:08.235 "dma_device_id": "system", 00:31:08.235 "dma_device_type": 1 00:31:08.235 }, 00:31:08.235 { 00:31:08.235 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:31:08.235 "dma_device_type": 2 00:31:08.235 } 00:31:08.235 ], 00:31:08.235 "driver_specific": {} 00:31:08.235 } 00:31:08.235 ] 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:08.235 "name": "Existed_Raid", 00:31:08.235 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:08.235 "strip_size_kb": 64, 00:31:08.235 "state": "configuring", 00:31:08.235 "raid_level": "concat", 00:31:08.235 "superblock": true, 00:31:08.235 "num_base_bdevs": 3, 00:31:08.235 "num_base_bdevs_discovered": 2, 00:31:08.235 "num_base_bdevs_operational": 3, 00:31:08.235 "base_bdevs_list": [ 00:31:08.235 { 00:31:08.235 "name": "BaseBdev1", 00:31:08.235 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:08.235 "is_configured": true, 00:31:08.235 "data_offset": 2048, 00:31:08.235 "data_size": 63488 00:31:08.235 }, 00:31:08.235 { 00:31:08.235 "name": null, 00:31:08.235 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:08.235 "is_configured": false, 00:31:08.235 "data_offset": 0, 00:31:08.235 "data_size": 63488 00:31:08.235 }, 00:31:08.235 { 00:31:08.235 "name": "BaseBdev3", 00:31:08.235 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:08.235 "is_configured": true, 00:31:08.235 "data_offset": 2048, 00:31:08.235 "data_size": 63488 00:31:08.235 } 00:31:08.235 ] 00:31:08.235 }' 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:08.235 14:00:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.803 [2024-10-09 14:00:15.179436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:08.803 "name": "Existed_Raid", 00:31:08.803 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:08.803 "strip_size_kb": 64, 00:31:08.803 "state": "configuring", 00:31:08.803 "raid_level": "concat", 00:31:08.803 "superblock": true, 00:31:08.803 "num_base_bdevs": 3, 00:31:08.803 "num_base_bdevs_discovered": 1, 00:31:08.803 "num_base_bdevs_operational": 3, 00:31:08.803 "base_bdevs_list": [ 00:31:08.803 { 00:31:08.803 "name": "BaseBdev1", 00:31:08.803 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:08.803 "is_configured": true, 00:31:08.803 "data_offset": 2048, 00:31:08.803 "data_size": 63488 00:31:08.803 }, 00:31:08.803 { 00:31:08.803 "name": null, 00:31:08.803 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:08.803 "is_configured": false, 00:31:08.803 "data_offset": 0, 00:31:08.803 "data_size": 63488 00:31:08.803 }, 00:31:08.803 { 00:31:08.803 "name": null, 00:31:08.803 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:08.803 "is_configured": false, 00:31:08.803 "data_offset": 0, 00:31:08.803 "data_size": 63488 00:31:08.803 } 00:31:08.803 ] 00:31:08.803 }' 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:08.803 14:00:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.371 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.372 [2024-10-09 14:00:15.707639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:09.372 14:00:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.372 "name": "Existed_Raid", 00:31:09.372 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:09.372 "strip_size_kb": 64, 00:31:09.372 "state": "configuring", 00:31:09.372 "raid_level": "concat", 00:31:09.372 "superblock": true, 00:31:09.372 "num_base_bdevs": 3, 00:31:09.372 "num_base_bdevs_discovered": 2, 00:31:09.372 "num_base_bdevs_operational": 3, 00:31:09.372 "base_bdevs_list": [ 00:31:09.372 { 00:31:09.372 "name": "BaseBdev1", 00:31:09.372 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:09.372 "is_configured": true, 00:31:09.372 "data_offset": 2048, 00:31:09.372 "data_size": 63488 00:31:09.372 }, 00:31:09.372 { 00:31:09.372 "name": null, 00:31:09.372 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:09.372 "is_configured": 
false, 00:31:09.372 "data_offset": 0, 00:31:09.372 "data_size": 63488 00:31:09.372 }, 00:31:09.372 { 00:31:09.372 "name": "BaseBdev3", 00:31:09.372 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:09.372 "is_configured": true, 00:31:09.372 "data_offset": 2048, 00:31:09.372 "data_size": 63488 00:31:09.372 } 00:31:09.372 ] 00:31:09.372 }' 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.372 14:00:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 [2024-10-09 14:00:16.279822] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:09.940 14:00:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:09.940 "name": "Existed_Raid", 00:31:09.940 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:09.940 "strip_size_kb": 64, 00:31:09.940 "state": "configuring", 00:31:09.940 "raid_level": "concat", 00:31:09.940 "superblock": true, 00:31:09.940 "num_base_bdevs": 3, 00:31:09.940 
"num_base_bdevs_discovered": 1, 00:31:09.940 "num_base_bdevs_operational": 3, 00:31:09.940 "base_bdevs_list": [ 00:31:09.940 { 00:31:09.940 "name": null, 00:31:09.940 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:09.940 "is_configured": false, 00:31:09.940 "data_offset": 0, 00:31:09.940 "data_size": 63488 00:31:09.940 }, 00:31:09.940 { 00:31:09.940 "name": null, 00:31:09.940 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:09.940 "is_configured": false, 00:31:09.940 "data_offset": 0, 00:31:09.940 "data_size": 63488 00:31:09.940 }, 00:31:09.940 { 00:31:09.940 "name": "BaseBdev3", 00:31:09.940 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:09.940 "is_configured": true, 00:31:09.940 "data_offset": 2048, 00:31:09.940 "data_size": 63488 00:31:09.940 } 00:31:09.940 ] 00:31:09.940 }' 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:09.940 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.508 14:00:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.508 [2024-10-09 14:00:16.796065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.508 
14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.508 "name": "Existed_Raid", 00:31:10.508 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:10.508 "strip_size_kb": 64, 00:31:10.508 "state": "configuring", 00:31:10.508 "raid_level": "concat", 00:31:10.508 "superblock": true, 00:31:10.508 "num_base_bdevs": 3, 00:31:10.508 "num_base_bdevs_discovered": 2, 00:31:10.508 "num_base_bdevs_operational": 3, 00:31:10.508 "base_bdevs_list": [ 00:31:10.508 { 00:31:10.508 "name": null, 00:31:10.508 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:10.508 "is_configured": false, 00:31:10.508 "data_offset": 0, 00:31:10.508 "data_size": 63488 00:31:10.508 }, 00:31:10.508 { 00:31:10.508 "name": "BaseBdev2", 00:31:10.508 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:10.508 "is_configured": true, 00:31:10.508 "data_offset": 2048, 00:31:10.508 "data_size": 63488 00:31:10.508 }, 00:31:10.508 { 00:31:10.508 "name": "BaseBdev3", 00:31:10.508 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:10.508 "is_configured": true, 00:31:10.508 "data_offset": 2048, 00:31:10.508 "data_size": 63488 00:31:10.508 } 00:31:10.508 ] 00:31:10.508 }' 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.508 14:00:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.767 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 467b1be1-151b-4fde-9677-88327c302e17 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.026 [2024-10-09 14:00:17.371593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:11.026 [2024-10-09 14:00:17.371766] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:11.026 [2024-10-09 14:00:17.371784] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:11.026 NewBaseBdev 00:31:11.026 [2024-10-09 14:00:17.372050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:11.026 [2024-10-09 14:00:17.372160] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:11.026 [2024-10-09 14:00:17.372171] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:31:11.026 [2024-10-09 14:00:17.372295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.026 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.027 [ 00:31:11.027 { 00:31:11.027 "name": "NewBaseBdev", 00:31:11.027 "aliases": [ 00:31:11.027 "467b1be1-151b-4fde-9677-88327c302e17" 00:31:11.027 ], 00:31:11.027 "product_name": "Malloc disk", 00:31:11.027 "block_size": 512, 
00:31:11.027 "num_blocks": 65536, 00:31:11.027 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:11.027 "assigned_rate_limits": { 00:31:11.027 "rw_ios_per_sec": 0, 00:31:11.027 "rw_mbytes_per_sec": 0, 00:31:11.027 "r_mbytes_per_sec": 0, 00:31:11.027 "w_mbytes_per_sec": 0 00:31:11.027 }, 00:31:11.027 "claimed": true, 00:31:11.027 "claim_type": "exclusive_write", 00:31:11.027 "zoned": false, 00:31:11.027 "supported_io_types": { 00:31:11.027 "read": true, 00:31:11.027 "write": true, 00:31:11.027 "unmap": true, 00:31:11.027 "flush": true, 00:31:11.027 "reset": true, 00:31:11.027 "nvme_admin": false, 00:31:11.027 "nvme_io": false, 00:31:11.027 "nvme_io_md": false, 00:31:11.027 "write_zeroes": true, 00:31:11.027 "zcopy": true, 00:31:11.027 "get_zone_info": false, 00:31:11.027 "zone_management": false, 00:31:11.027 "zone_append": false, 00:31:11.027 "compare": false, 00:31:11.027 "compare_and_write": false, 00:31:11.027 "abort": true, 00:31:11.027 "seek_hole": false, 00:31:11.027 "seek_data": false, 00:31:11.027 "copy": true, 00:31:11.027 "nvme_iov_md": false 00:31:11.027 }, 00:31:11.027 "memory_domains": [ 00:31:11.027 { 00:31:11.027 "dma_device_id": "system", 00:31:11.027 "dma_device_type": 1 00:31:11.027 }, 00:31:11.027 { 00:31:11.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.027 "dma_device_type": 2 00:31:11.027 } 00:31:11.027 ], 00:31:11.027 "driver_specific": {} 00:31:11.027 } 00:31:11.027 ] 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:11.027 "name": "Existed_Raid", 00:31:11.027 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:11.027 "strip_size_kb": 64, 00:31:11.027 "state": "online", 00:31:11.027 "raid_level": "concat", 00:31:11.027 "superblock": true, 00:31:11.027 "num_base_bdevs": 3, 00:31:11.027 "num_base_bdevs_discovered": 3, 00:31:11.027 "num_base_bdevs_operational": 3, 00:31:11.027 "base_bdevs_list": [ 00:31:11.027 { 00:31:11.027 "name": "NewBaseBdev", 00:31:11.027 "uuid": 
"467b1be1-151b-4fde-9677-88327c302e17", 00:31:11.027 "is_configured": true, 00:31:11.027 "data_offset": 2048, 00:31:11.027 "data_size": 63488 00:31:11.027 }, 00:31:11.027 { 00:31:11.027 "name": "BaseBdev2", 00:31:11.027 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:11.027 "is_configured": true, 00:31:11.027 "data_offset": 2048, 00:31:11.027 "data_size": 63488 00:31:11.027 }, 00:31:11.027 { 00:31:11.027 "name": "BaseBdev3", 00:31:11.027 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:11.027 "is_configured": true, 00:31:11.027 "data_offset": 2048, 00:31:11.027 "data_size": 63488 00:31:11.027 } 00:31:11.027 ] 00:31:11.027 }' 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:11.027 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:31:11.595 [2024-10-09 14:00:17.880107] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:11.595 "name": "Existed_Raid", 00:31:11.595 "aliases": [ 00:31:11.595 "6e7f70d9-5f6f-47fc-8948-c530136a2760" 00:31:11.595 ], 00:31:11.595 "product_name": "Raid Volume", 00:31:11.595 "block_size": 512, 00:31:11.595 "num_blocks": 190464, 00:31:11.595 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:11.595 "assigned_rate_limits": { 00:31:11.595 "rw_ios_per_sec": 0, 00:31:11.595 "rw_mbytes_per_sec": 0, 00:31:11.595 "r_mbytes_per_sec": 0, 00:31:11.595 "w_mbytes_per_sec": 0 00:31:11.595 }, 00:31:11.595 "claimed": false, 00:31:11.595 "zoned": false, 00:31:11.595 "supported_io_types": { 00:31:11.595 "read": true, 00:31:11.595 "write": true, 00:31:11.595 "unmap": true, 00:31:11.595 "flush": true, 00:31:11.595 "reset": true, 00:31:11.595 "nvme_admin": false, 00:31:11.595 "nvme_io": false, 00:31:11.595 "nvme_io_md": false, 00:31:11.595 "write_zeroes": true, 00:31:11.595 "zcopy": false, 00:31:11.595 "get_zone_info": false, 00:31:11.595 "zone_management": false, 00:31:11.595 "zone_append": false, 00:31:11.595 "compare": false, 00:31:11.595 "compare_and_write": false, 00:31:11.595 "abort": false, 00:31:11.595 "seek_hole": false, 00:31:11.595 "seek_data": false, 00:31:11.595 "copy": false, 00:31:11.595 "nvme_iov_md": false 00:31:11.595 }, 00:31:11.595 "memory_domains": [ 00:31:11.595 { 00:31:11.595 "dma_device_id": "system", 00:31:11.595 "dma_device_type": 1 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.595 "dma_device_type": 2 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "dma_device_id": "system", 00:31:11.595 "dma_device_type": 1 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.595 "dma_device_type": 2 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "dma_device_id": "system", 00:31:11.595 "dma_device_type": 1 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.595 "dma_device_type": 2 00:31:11.595 } 00:31:11.595 ], 00:31:11.595 "driver_specific": { 00:31:11.595 "raid": { 00:31:11.595 "uuid": "6e7f70d9-5f6f-47fc-8948-c530136a2760", 00:31:11.595 "strip_size_kb": 64, 00:31:11.595 "state": "online", 00:31:11.595 "raid_level": "concat", 00:31:11.595 "superblock": true, 00:31:11.595 "num_base_bdevs": 3, 00:31:11.595 "num_base_bdevs_discovered": 3, 00:31:11.595 "num_base_bdevs_operational": 3, 00:31:11.595 "base_bdevs_list": [ 00:31:11.595 { 00:31:11.595 "name": "NewBaseBdev", 00:31:11.595 "uuid": "467b1be1-151b-4fde-9677-88327c302e17", 00:31:11.595 "is_configured": true, 00:31:11.595 "data_offset": 2048, 00:31:11.595 "data_size": 63488 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "name": "BaseBdev2", 00:31:11.595 "uuid": "82abe659-1a36-4a4c-a5b6-b6d0bcd8820b", 00:31:11.595 "is_configured": true, 00:31:11.595 "data_offset": 2048, 00:31:11.595 "data_size": 63488 00:31:11.595 }, 00:31:11.595 { 00:31:11.595 "name": "BaseBdev3", 00:31:11.595 "uuid": "79623843-cacf-4413-b892-cede447d6846", 00:31:11.595 "is_configured": true, 00:31:11.595 "data_offset": 2048, 00:31:11.595 "data_size": 63488 00:31:11.595 } 00:31:11.595 ] 00:31:11.595 } 00:31:11.595 } 00:31:11.595 }' 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:11.595 BaseBdev2 00:31:11.595 BaseBdev3' 00:31:11.595 14:00:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:11.595 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:11.855 [2024-10-09 14:00:18.159863] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:11.855 [2024-10-09 14:00:18.159891] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:11.855 [2024-10-09 14:00:18.159962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:11.855 [2024-10-09 14:00:18.160017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:11.855 [2024-10-09 14:00:18.160031] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77674 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77674 ']' 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77674 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77674 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77674' 00:31:11.855 killing process with pid 77674 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77674 00:31:11.855 [2024-10-09 14:00:18.209087] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:11.855 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77674 00:31:11.855 [2024-10-09 14:00:18.241539] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:12.114 14:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:12.114 00:31:12.114 real 0m9.586s 00:31:12.114 user 0m16.533s 00:31:12.114 sys 0m1.966s 00:31:12.114 ************************************ 00:31:12.114 END TEST raid_state_function_test_sb 
00:31:12.114 ************************************ 00:31:12.114 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:12.114 14:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:12.114 14:00:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:31:12.114 14:00:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:12.114 14:00:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:12.114 14:00:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:12.114 ************************************ 00:31:12.114 START TEST raid_superblock_test 00:31:12.114 ************************************ 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:12.114 14:00:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:31:12.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78290 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78290 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78290 ']' 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:12.114 14:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.380 [2024-10-09 14:00:18.673959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:12.380 [2024-10-09 14:00:18.674186] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78290 ] 00:31:12.380 [2024-10-09 14:00:18.858306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.380 [2024-10-09 14:00:18.908029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.645 [2024-10-09 14:00:18.953676] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:12.645 [2024-10-09 14:00:18.953960] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:31:13.212 
14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 malloc1 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 [2024-10-09 14:00:19.627066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:13.212 [2024-10-09 14:00:19.627301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.212 [2024-10-09 14:00:19.627361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:13.212 [2024-10-09 14:00:19.627465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.212 [2024-10-09 14:00:19.630116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.212 [2024-10-09 14:00:19.630278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:13.212 pt1 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 malloc2 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 [2024-10-09 14:00:19.665728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:13.212 [2024-10-09 14:00:19.665808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.212 [2024-10-09 14:00:19.665835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:13.212 [2024-10-09 14:00:19.665855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.212 [2024-10-09 14:00:19.668563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.212 [2024-10-09 14:00:19.668603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:13.212 
pt2 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 malloc3 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.212 [2024-10-09 14:00:19.690770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:13.212 [2024-10-09 14:00:19.690827] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.212 [2024-10-09 14:00:19.690849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:13.212 [2024-10-09 14:00:19.690863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.212 [2024-10-09 14:00:19.693459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.212 [2024-10-09 14:00:19.693502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:13.212 pt3 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:13.212 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.213 [2024-10-09 14:00:19.702830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:13.213 [2024-10-09 14:00:19.705194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:13.213 [2024-10-09 14:00:19.705259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:13.213 [2024-10-09 14:00:19.705401] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:31:13.213 [2024-10-09 14:00:19.705413] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:13.213 [2024-10-09 14:00:19.705707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:31:13.213 [2024-10-09 14:00:19.705855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:31:13.213 [2024-10-09 14:00:19.705873] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:31:13.213 [2024-10-09 14:00:19.706017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.213 14:00:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:13.213 "name": "raid_bdev1", 00:31:13.213 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:13.213 "strip_size_kb": 64, 00:31:13.213 "state": "online", 00:31:13.213 "raid_level": "concat", 00:31:13.213 "superblock": true, 00:31:13.213 "num_base_bdevs": 3, 00:31:13.213 "num_base_bdevs_discovered": 3, 00:31:13.213 "num_base_bdevs_operational": 3, 00:31:13.213 "base_bdevs_list": [ 00:31:13.213 { 00:31:13.213 "name": "pt1", 00:31:13.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:13.213 "is_configured": true, 00:31:13.213 "data_offset": 2048, 00:31:13.213 "data_size": 63488 00:31:13.213 }, 00:31:13.213 { 00:31:13.213 "name": "pt2", 00:31:13.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:13.213 "is_configured": true, 00:31:13.213 "data_offset": 2048, 00:31:13.213 "data_size": 63488 00:31:13.213 }, 00:31:13.213 { 00:31:13.213 "name": "pt3", 00:31:13.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:13.213 "is_configured": true, 00:31:13.213 "data_offset": 2048, 00:31:13.213 "data_size": 63488 00:31:13.213 } 00:31:13.213 ] 00:31:13.213 }' 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:13.213 14:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.780 [2024-10-09 14:00:20.175256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.780 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:13.780 "name": "raid_bdev1", 00:31:13.780 "aliases": [ 00:31:13.780 "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f" 00:31:13.780 ], 00:31:13.780 "product_name": "Raid Volume", 00:31:13.780 "block_size": 512, 00:31:13.780 "num_blocks": 190464, 00:31:13.780 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:13.780 "assigned_rate_limits": { 00:31:13.780 "rw_ios_per_sec": 0, 00:31:13.780 "rw_mbytes_per_sec": 0, 00:31:13.780 "r_mbytes_per_sec": 0, 00:31:13.780 "w_mbytes_per_sec": 0 00:31:13.780 }, 00:31:13.780 "claimed": false, 00:31:13.780 "zoned": false, 00:31:13.780 "supported_io_types": { 00:31:13.780 "read": true, 00:31:13.780 "write": true, 00:31:13.780 "unmap": true, 00:31:13.780 "flush": true, 00:31:13.780 "reset": true, 00:31:13.780 "nvme_admin": false, 00:31:13.780 "nvme_io": false, 00:31:13.780 "nvme_io_md": false, 00:31:13.780 "write_zeroes": true, 00:31:13.780 "zcopy": false, 00:31:13.780 "get_zone_info": false, 00:31:13.780 "zone_management": false, 00:31:13.780 "zone_append": false, 00:31:13.780 "compare": 
false, 00:31:13.780 "compare_and_write": false, 00:31:13.780 "abort": false, 00:31:13.780 "seek_hole": false, 00:31:13.780 "seek_data": false, 00:31:13.780 "copy": false, 00:31:13.780 "nvme_iov_md": false 00:31:13.780 }, 00:31:13.781 "memory_domains": [ 00:31:13.781 { 00:31:13.781 "dma_device_id": "system", 00:31:13.781 "dma_device_type": 1 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.781 "dma_device_type": 2 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "dma_device_id": "system", 00:31:13.781 "dma_device_type": 1 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.781 "dma_device_type": 2 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "dma_device_id": "system", 00:31:13.781 "dma_device_type": 1 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:13.781 "dma_device_type": 2 00:31:13.781 } 00:31:13.781 ], 00:31:13.781 "driver_specific": { 00:31:13.781 "raid": { 00:31:13.781 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:13.781 "strip_size_kb": 64, 00:31:13.781 "state": "online", 00:31:13.781 "raid_level": "concat", 00:31:13.781 "superblock": true, 00:31:13.781 "num_base_bdevs": 3, 00:31:13.781 "num_base_bdevs_discovered": 3, 00:31:13.781 "num_base_bdevs_operational": 3, 00:31:13.781 "base_bdevs_list": [ 00:31:13.781 { 00:31:13.781 "name": "pt1", 00:31:13.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:13.781 "is_configured": true, 00:31:13.781 "data_offset": 2048, 00:31:13.781 "data_size": 63488 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "name": "pt2", 00:31:13.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:13.781 "is_configured": true, 00:31:13.781 "data_offset": 2048, 00:31:13.781 "data_size": 63488 00:31:13.781 }, 00:31:13.781 { 00:31:13.781 "name": "pt3", 00:31:13.781 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:13.781 "is_configured": true, 00:31:13.781 "data_offset": 2048, 00:31:13.781 
"data_size": 63488 00:31:13.781 } 00:31:13.781 ] 00:31:13.781 } 00:31:13.781 } 00:31:13.781 }' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:13.781 pt2 00:31:13.781 pt3' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.781 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:14.070 14:00:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:14.070 [2024-10-09 14:00:20.447256] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:14.070 14:00:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bebbc7c5-c7db-47c9-baf1-9c8cad6e343f 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bebbc7c5-c7db-47c9-baf1-9c8cad6e343f ']' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 [2024-10-09 14:00:20.486976] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:14.070 [2024-10-09 14:00:20.487008] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:14.070 [2024-10-09 14:00:20.487100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:14.070 [2024-10-09 14:00:20.487179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:14.070 [2024-10-09 14:00:20.487201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:14.070 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.351 [2024-10-09 14:00:20.627036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:14.351 [2024-10-09 14:00:20.629762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:31:14.351 [2024-10-09 14:00:20.629812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:14.351 [2024-10-09 14:00:20.629872] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:14.351 [2024-10-09 14:00:20.629934] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:14.351 [2024-10-09 14:00:20.629965] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:14.351 [2024-10-09 14:00:20.629985] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:14.351 [2024-10-09 14:00:20.630008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:31:14.351 request: 00:31:14.351 { 00:31:14.351 "name": "raid_bdev1", 00:31:14.351 "raid_level": "concat", 00:31:14.351 "base_bdevs": [ 00:31:14.351 "malloc1", 00:31:14.351 "malloc2", 00:31:14.351 "malloc3" 00:31:14.351 ], 00:31:14.351 "strip_size_kb": 64, 00:31:14.351 "superblock": false, 00:31:14.351 "method": "bdev_raid_create", 00:31:14.351 "req_id": 1 00:31:14.351 } 00:31:14.351 Got JSON-RPC error response 00:31:14.351 response: 00:31:14.351 { 00:31:14.351 "code": -17, 00:31:14.351 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:14.351 } 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.351 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.352 [2024-10-09 14:00:20.690991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:14.352 [2024-10-09 14:00:20.691181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:14.352 [2024-10-09 14:00:20.691300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:14.352 [2024-10-09 14:00:20.691389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:14.352 [2024-10-09 14:00:20.694313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:14.352 [2024-10-09 14:00:20.694464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:14.352 [2024-10-09 14:00:20.694654] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:14.352 [2024-10-09 14:00:20.694803] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:14.352 pt1 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.352 "name": "raid_bdev1", 
00:31:14.352 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:14.352 "strip_size_kb": 64, 00:31:14.352 "state": "configuring", 00:31:14.352 "raid_level": "concat", 00:31:14.352 "superblock": true, 00:31:14.352 "num_base_bdevs": 3, 00:31:14.352 "num_base_bdevs_discovered": 1, 00:31:14.352 "num_base_bdevs_operational": 3, 00:31:14.352 "base_bdevs_list": [ 00:31:14.352 { 00:31:14.352 "name": "pt1", 00:31:14.352 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:14.352 "is_configured": true, 00:31:14.352 "data_offset": 2048, 00:31:14.352 "data_size": 63488 00:31:14.352 }, 00:31:14.352 { 00:31:14.352 "name": null, 00:31:14.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:14.352 "is_configured": false, 00:31:14.352 "data_offset": 2048, 00:31:14.352 "data_size": 63488 00:31:14.352 }, 00:31:14.352 { 00:31:14.352 "name": null, 00:31:14.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:14.352 "is_configured": false, 00:31:14.352 "data_offset": 2048, 00:31:14.352 "data_size": 63488 00:31:14.352 } 00:31:14.352 ] 00:31:14.352 }' 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.352 14:00:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.610 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:14.610 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.611 [2024-10-09 14:00:21.127232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:14.611 [2024-10-09 14:00:21.127310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:14.611 [2024-10-09 14:00:21.127346] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:14.611 [2024-10-09 14:00:21.127382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:14.611 [2024-10-09 14:00:21.127882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:14.611 [2024-10-09 14:00:21.127906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:14.611 [2024-10-09 14:00:21.127986] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:14.611 [2024-10-09 14:00:21.128012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:14.611 pt2 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.611 [2024-10-09 14:00:21.135233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:14.611 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.870 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:14.870 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:14.870 "name": "raid_bdev1", 00:31:14.870 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:14.870 "strip_size_kb": 64, 00:31:14.870 "state": "configuring", 00:31:14.870 "raid_level": "concat", 00:31:14.870 "superblock": true, 00:31:14.870 "num_base_bdevs": 3, 00:31:14.870 "num_base_bdevs_discovered": 1, 00:31:14.870 "num_base_bdevs_operational": 3, 00:31:14.870 "base_bdevs_list": [ 00:31:14.870 { 00:31:14.870 "name": "pt1", 00:31:14.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:14.870 "is_configured": true, 00:31:14.870 "data_offset": 2048, 00:31:14.870 "data_size": 63488 00:31:14.870 }, 00:31:14.870 { 00:31:14.870 "name": null, 00:31:14.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:14.870 "is_configured": false, 00:31:14.870 "data_offset": 0, 00:31:14.870 "data_size": 63488 00:31:14.870 }, 00:31:14.870 { 00:31:14.870 "name": null, 00:31:14.870 
"uuid": "00000000-0000-0000-0000-000000000003", 00:31:14.870 "is_configured": false, 00:31:14.870 "data_offset": 2048, 00:31:14.870 "data_size": 63488 00:31:14.870 } 00:31:14.870 ] 00:31:14.870 }' 00:31:14.870 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:14.870 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.129 [2024-10-09 14:00:21.595335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:15.129 [2024-10-09 14:00:21.595405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.129 [2024-10-09 14:00:21.595430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:15.129 [2024-10-09 14:00:21.595442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.129 [2024-10-09 14:00:21.595896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.129 [2024-10-09 14:00:21.595915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:15.129 [2024-10-09 14:00:21.596011] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:15.129 [2024-10-09 14:00:21.596035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:15.129 pt2 00:31:15.129 14:00:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.129 [2024-10-09 14:00:21.603305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:15.129 [2024-10-09 14:00:21.603358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.129 [2024-10-09 14:00:21.603382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:15.129 [2024-10-09 14:00:21.603393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.129 [2024-10-09 14:00:21.603777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.129 [2024-10-09 14:00:21.603796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:15.129 [2024-10-09 14:00:21.603861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:15.129 [2024-10-09 14:00:21.603880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:15.129 [2024-10-09 14:00:21.603985] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:31:15.129 [2024-10-09 14:00:21.603995] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:15.129 [2024-10-09 14:00:21.604243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:31:15.129 [2024-10-09 14:00:21.604345] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:31:15.129 [2024-10-09 14:00:21.604358] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:31:15.129 [2024-10-09 14:00:21.604458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:15.129 pt3 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.129 14:00:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:15.129 "name": "raid_bdev1", 00:31:15.129 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:15.129 "strip_size_kb": 64, 00:31:15.129 "state": "online", 00:31:15.129 "raid_level": "concat", 00:31:15.129 "superblock": true, 00:31:15.129 "num_base_bdevs": 3, 00:31:15.129 "num_base_bdevs_discovered": 3, 00:31:15.129 "num_base_bdevs_operational": 3, 00:31:15.129 "base_bdevs_list": [ 00:31:15.129 { 00:31:15.129 "name": "pt1", 00:31:15.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:15.129 "is_configured": true, 00:31:15.129 "data_offset": 2048, 00:31:15.129 "data_size": 63488 00:31:15.129 }, 00:31:15.129 { 00:31:15.129 "name": "pt2", 00:31:15.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:15.129 "is_configured": true, 00:31:15.129 "data_offset": 2048, 00:31:15.129 "data_size": 63488 00:31:15.129 }, 00:31:15.129 { 00:31:15.129 "name": "pt3", 00:31:15.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:15.129 "is_configured": true, 00:31:15.129 "data_offset": 2048, 00:31:15.129 "data_size": 63488 00:31:15.129 } 00:31:15.129 ] 00:31:15.129 }' 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:15.129 14:00:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.697 [2024-10-09 14:00:22.055764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.697 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.697 "name": "raid_bdev1", 00:31:15.697 "aliases": [ 00:31:15.697 "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f" 00:31:15.697 ], 00:31:15.697 "product_name": "Raid Volume", 00:31:15.697 "block_size": 512, 00:31:15.697 "num_blocks": 190464, 00:31:15.697 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:15.697 "assigned_rate_limits": { 00:31:15.697 "rw_ios_per_sec": 0, 00:31:15.697 "rw_mbytes_per_sec": 0, 00:31:15.697 "r_mbytes_per_sec": 0, 00:31:15.697 "w_mbytes_per_sec": 0 00:31:15.697 }, 00:31:15.697 "claimed": false, 00:31:15.697 "zoned": false, 00:31:15.697 "supported_io_types": { 00:31:15.697 "read": true, 00:31:15.697 "write": true, 00:31:15.697 "unmap": true, 00:31:15.697 "flush": true, 00:31:15.697 "reset": true, 00:31:15.697 "nvme_admin": false, 00:31:15.697 "nvme_io": false, 
00:31:15.697 "nvme_io_md": false, 00:31:15.697 "write_zeroes": true, 00:31:15.697 "zcopy": false, 00:31:15.697 "get_zone_info": false, 00:31:15.697 "zone_management": false, 00:31:15.697 "zone_append": false, 00:31:15.697 "compare": false, 00:31:15.697 "compare_and_write": false, 00:31:15.697 "abort": false, 00:31:15.697 "seek_hole": false, 00:31:15.698 "seek_data": false, 00:31:15.698 "copy": false, 00:31:15.698 "nvme_iov_md": false 00:31:15.698 }, 00:31:15.698 "memory_domains": [ 00:31:15.698 { 00:31:15.698 "dma_device_id": "system", 00:31:15.698 "dma_device_type": 1 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.698 "dma_device_type": 2 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "dma_device_id": "system", 00:31:15.698 "dma_device_type": 1 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.698 "dma_device_type": 2 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "dma_device_id": "system", 00:31:15.698 "dma_device_type": 1 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.698 "dma_device_type": 2 00:31:15.698 } 00:31:15.698 ], 00:31:15.698 "driver_specific": { 00:31:15.698 "raid": { 00:31:15.698 "uuid": "bebbc7c5-c7db-47c9-baf1-9c8cad6e343f", 00:31:15.698 "strip_size_kb": 64, 00:31:15.698 "state": "online", 00:31:15.698 "raid_level": "concat", 00:31:15.698 "superblock": true, 00:31:15.698 "num_base_bdevs": 3, 00:31:15.698 "num_base_bdevs_discovered": 3, 00:31:15.698 "num_base_bdevs_operational": 3, 00:31:15.698 "base_bdevs_list": [ 00:31:15.698 { 00:31:15.698 "name": "pt1", 00:31:15.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:15.698 "is_configured": true, 00:31:15.698 "data_offset": 2048, 00:31:15.698 "data_size": 63488 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "name": "pt2", 00:31:15.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:15.698 "is_configured": true, 00:31:15.698 "data_offset": 2048, 00:31:15.698 
"data_size": 63488 00:31:15.698 }, 00:31:15.698 { 00:31:15.698 "name": "pt3", 00:31:15.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:15.698 "is_configured": true, 00:31:15.698 "data_offset": 2048, 00:31:15.698 "data_size": 63488 00:31:15.698 } 00:31:15.698 ] 00:31:15.698 } 00:31:15.698 } 00:31:15.698 }' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:15.698 pt2 00:31:15.698 pt3' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.698 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:15.957 [2024-10-09 14:00:22.331816] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bebbc7c5-c7db-47c9-baf1-9c8cad6e343f '!=' bebbc7c5-c7db-47c9-baf1-9c8cad6e343f ']' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78290 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78290 ']' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78290 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78290 00:31:15.957 killing process with pid 78290 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78290' 00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78290 00:31:15.957 [2024-10-09 14:00:22.410471] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:31:15.957 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 78290 00:31:15.957 [2024-10-09 14:00:22.410570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:15.957 [2024-10-09 14:00:22.410659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:15.957 [2024-10-09 14:00:22.410671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:31:15.957 [2024-10-09 14:00:22.448371] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:16.216 14:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:16.216 00:31:16.216 real 0m4.141s 00:31:16.216 user 0m6.540s 00:31:16.216 sys 0m0.958s 00:31:16.216 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:16.216 14:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.216 ************************************ 00:31:16.216 END TEST raid_superblock_test 00:31:16.216 ************************************ 00:31:16.216 14:00:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:31:16.216 14:00:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:16.216 14:00:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:16.216 14:00:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:16.475 ************************************ 00:31:16.475 START TEST raid_read_error_test 00:31:16.475 ************************************ 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:16.475 14:00:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GbnXVAl2YC 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78532 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78532 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78532 ']' 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:16.475 14:00:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:16.475 [2024-10-09 14:00:22.897309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:16.475 [2024-10-09 14:00:22.897499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78532 ] 00:31:16.734 [2024-10-09 14:00:23.075796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.734 [2024-10-09 14:00:23.122014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.735 [2024-10-09 14:00:23.166628] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:16.735 [2024-10-09 14:00:23.166661] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.302 BaseBdev1_malloc 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.302 true 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.302 [2024-10-09 14:00:23.775881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:17.302 [2024-10-09 14:00:23.775946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.302 [2024-10-09 14:00:23.775984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:17.302 [2024-10-09 14:00:23.775998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.302 [2024-10-09 14:00:23.778880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.302 [2024-10-09 14:00:23.778923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:17.302 BaseBdev1 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:17.302 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 BaseBdev2_malloc 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 true 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 [2024-10-09 14:00:23.817264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:17.303 [2024-10-09 14:00:23.817336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.303 [2024-10-09 14:00:23.817362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:17.303 [2024-10-09 14:00:23.817376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.303 [2024-10-09 14:00:23.820081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.303 [2024-10-09 14:00:23.820133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:17.303 BaseBdev2 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 BaseBdev3_malloc 00:31:17.303 14:00:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 true 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.303 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.303 [2024-10-09 14:00:23.851024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:17.562 [2024-10-09 14:00:23.851221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.562 [2024-10-09 14:00:23.851259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:17.562 [2024-10-09 14:00:23.851273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.562 [2024-10-09 14:00:23.853992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.562 [2024-10-09 14:00:23.854034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:17.562 BaseBdev3 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 [2024-10-09 14:00:23.859078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:17.562 [2024-10-09 14:00:23.861411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:17.562 [2024-10-09 14:00:23.861646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:17.562 [2024-10-09 14:00:23.861880] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:17.562 [2024-10-09 14:00:23.861902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:17.562 [2024-10-09 14:00:23.862238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:17.562 [2024-10-09 14:00:23.862414] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:17.562 [2024-10-09 14:00:23.862429] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:31:17.562 [2024-10-09 14:00:23.862599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:17.562 14:00:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:17.562 "name": "raid_bdev1", 00:31:17.562 "uuid": "15cebf95-bc78-440c-8353-b779d78cc908", 00:31:17.562 "strip_size_kb": 64, 00:31:17.562 "state": "online", 00:31:17.562 "raid_level": "concat", 00:31:17.562 "superblock": true, 00:31:17.562 "num_base_bdevs": 3, 00:31:17.562 "num_base_bdevs_discovered": 3, 00:31:17.562 "num_base_bdevs_operational": 3, 00:31:17.562 "base_bdevs_list": [ 00:31:17.562 { 00:31:17.562 "name": "BaseBdev1", 00:31:17.562 "uuid": "00d0b783-82f2-566c-a17a-b75d35cb531b", 00:31:17.562 "is_configured": true, 00:31:17.562 "data_offset": 2048, 00:31:17.562 "data_size": 63488 00:31:17.562 }, 00:31:17.562 { 00:31:17.562 "name": "BaseBdev2", 00:31:17.562 "uuid": "25735818-0e8d-5348-98f1-37423608be49", 00:31:17.562 "is_configured": true, 00:31:17.562 "data_offset": 2048, 00:31:17.562 "data_size": 63488 
00:31:17.562 }, 00:31:17.562 { 00:31:17.562 "name": "BaseBdev3", 00:31:17.562 "uuid": "a7ec6a1c-6eba-541b-9210-5df49f0f8406", 00:31:17.562 "is_configured": true, 00:31:17.562 "data_offset": 2048, 00:31:17.562 "data_size": 63488 00:31:17.562 } 00:31:17.562 ] 00:31:17.562 }' 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:17.562 14:00:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.820 14:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:17.820 14:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:18.080 [2024-10-09 14:00:24.479710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.016 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:19.017 "name": "raid_bdev1", 00:31:19.017 "uuid": "15cebf95-bc78-440c-8353-b779d78cc908", 00:31:19.017 "strip_size_kb": 64, 00:31:19.017 "state": "online", 00:31:19.017 "raid_level": "concat", 00:31:19.017 "superblock": true, 00:31:19.017 "num_base_bdevs": 3, 00:31:19.017 "num_base_bdevs_discovered": 3, 00:31:19.017 "num_base_bdevs_operational": 3, 00:31:19.017 "base_bdevs_list": [ 00:31:19.017 { 00:31:19.017 "name": "BaseBdev1", 00:31:19.017 "uuid": "00d0b783-82f2-566c-a17a-b75d35cb531b", 00:31:19.017 "is_configured": true, 00:31:19.017 "data_offset": 2048, 00:31:19.017 "data_size": 63488 
00:31:19.017 }, 00:31:19.017 { 00:31:19.017 "name": "BaseBdev2", 00:31:19.017 "uuid": "25735818-0e8d-5348-98f1-37423608be49", 00:31:19.017 "is_configured": true, 00:31:19.017 "data_offset": 2048, 00:31:19.017 "data_size": 63488 00:31:19.017 }, 00:31:19.017 { 00:31:19.017 "name": "BaseBdev3", 00:31:19.017 "uuid": "a7ec6a1c-6eba-541b-9210-5df49f0f8406", 00:31:19.017 "is_configured": true, 00:31:19.017 "data_offset": 2048, 00:31:19.017 "data_size": 63488 00:31:19.017 } 00:31:19.017 ] 00:31:19.017 }' 00:31:19.017 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:19.017 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.275 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:19.275 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:19.275 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.275 [2024-10-09 14:00:25.718069] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:19.276 [2024-10-09 14:00:25.718245] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:19.276 [2024-10-09 14:00:25.720852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:19.276 [2024-10-09 14:00:25.720903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.276 [2024-10-09 14:00:25.720939] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:19.276 [2024-10-09 14:00:25.720955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:31:19.276 { 00:31:19.276 "results": [ 00:31:19.276 { 00:31:19.276 "job": "raid_bdev1", 00:31:19.276 "core_mask": "0x1", 00:31:19.276 "workload": "randrw", 00:31:19.276 "percentage": 50, 
00:31:19.276 "status": "finished", 00:31:19.276 "queue_depth": 1, 00:31:19.276 "io_size": 131072, 00:31:19.276 "runtime": 1.235837, 00:31:19.276 "iops": 15241.492203259815, 00:31:19.276 "mibps": 1905.186525407477, 00:31:19.276 "io_failed": 1, 00:31:19.276 "io_timeout": 0, 00:31:19.276 "avg_latency_us": 90.5980568132121, 00:31:19.276 "min_latency_us": 27.184761904761906, 00:31:19.276 "max_latency_us": 1466.7580952380952 00:31:19.276 } 00:31:19.276 ], 00:31:19.276 "core_count": 1 00:31:19.276 } 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78532 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78532 ']' 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78532 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78532 00:31:19.276 killing process with pid 78532 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78532' 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78532 00:31:19.276 [2024-10-09 14:00:25.765342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:19.276 14:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78532 00:31:19.276 [2024-10-09 
14:00:25.791482] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GbnXVAl2YC 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.81 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.81 != \0\.\0\0 ]] 00:31:19.542 00:31:19.542 real 0m3.279s 00:31:19.542 user 0m4.123s 00:31:19.542 sys 0m0.613s 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:19.542 14:00:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.542 ************************************ 00:31:19.542 END TEST raid_read_error_test 00:31:19.542 ************************************ 00:31:19.802 14:00:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:31:19.802 14:00:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:19.802 14:00:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.802 14:00:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:19.802 ************************************ 00:31:19.802 START TEST raid_write_error_test 00:31:19.802 ************************************ 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:31:19.802 14:00:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:19.802 14:00:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:19.802 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l8oVEHf6dC 00:31:19.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78661 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78661 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78661 ']' 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:19.803 14:00:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.803 [2024-10-09 14:00:26.243992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:19.803 [2024-10-09 14:00:26.244206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78661 ] 00:31:20.061 [2024-10-09 14:00:26.426389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.061 [2024-10-09 14:00:26.477237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.061 [2024-10-09 14:00:26.521999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:20.061 [2024-10-09 14:00:26.522037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.629 BaseBdev1_malloc 00:31:20.629 14:00:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.629 true 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.629 [2024-10-09 14:00:27.171610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:20.629 [2024-10-09 14:00:27.171670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.629 [2024-10-09 14:00:27.171697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:20.629 [2024-10-09 14:00:27.171710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.629 [2024-10-09 14:00:27.174675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.629 [2024-10-09 14:00:27.174891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:20.629 BaseBdev1 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.629 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 BaseBdev2_malloc 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 true 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 [2024-10-09 14:00:27.225767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:20.888 [2024-10-09 14:00:27.225830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.888 [2024-10-09 14:00:27.225857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:20.888 [2024-10-09 14:00:27.225871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.888 [2024-10-09 14:00:27.228781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.888 [2024-10-09 14:00:27.228826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:20.888 BaseBdev2 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 BaseBdev3_malloc 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 true 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 [2024-10-09 14:00:27.267469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:20.888 [2024-10-09 14:00:27.267699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.888 [2024-10-09 14:00:27.267740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:20.888 [2024-10-09 14:00:27.267753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.888 [2024-10-09 14:00:27.270495] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.888 [2024-10-09 14:00:27.270539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:20.888 BaseBdev3 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 [2024-10-09 14:00:27.279491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:20.888 [2024-10-09 14:00:27.281798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:20.888 [2024-10-09 14:00:27.282054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:20.888 [2024-10-09 14:00:27.282285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:20.888 [2024-10-09 14:00:27.282315] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:31:20.888 [2024-10-09 14:00:27.282671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:20.888 [2024-10-09 14:00:27.282829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:20.888 [2024-10-09 14:00:27.282840] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:31:20.888 [2024-10-09 14:00:27.283014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 
14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:20.888 "name": "raid_bdev1", 00:31:20.888 "uuid": "556192a1-2e6c-4fc7-bec9-4529aa948f4c", 00:31:20.888 "strip_size_kb": 64, 00:31:20.888 "state": "online", 00:31:20.888 "raid_level": "concat", 00:31:20.888 "superblock": true, 
00:31:20.888 "num_base_bdevs": 3, 00:31:20.888 "num_base_bdevs_discovered": 3, 00:31:20.888 "num_base_bdevs_operational": 3, 00:31:20.888 "base_bdevs_list": [ 00:31:20.888 { 00:31:20.888 "name": "BaseBdev1", 00:31:20.888 "uuid": "9d52cd81-2b84-5b99-8f9d-bffae3779706", 00:31:20.888 "is_configured": true, 00:31:20.888 "data_offset": 2048, 00:31:20.888 "data_size": 63488 00:31:20.888 }, 00:31:20.888 { 00:31:20.888 "name": "BaseBdev2", 00:31:20.888 "uuid": "9bab13ea-ce4e-5452-9e19-e6eeda1b492e", 00:31:20.888 "is_configured": true, 00:31:20.888 "data_offset": 2048, 00:31:20.888 "data_size": 63488 00:31:20.888 }, 00:31:20.888 { 00:31:20.888 "name": "BaseBdev3", 00:31:20.888 "uuid": "4a2b76b5-87ba-5967-bea0-ff4fb09613fb", 00:31:20.888 "is_configured": true, 00:31:20.888 "data_offset": 2048, 00:31:20.888 "data_size": 63488 00:31:20.888 } 00:31:20.888 ] 00:31:20.888 }' 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:20.888 14:00:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:21.455 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:21.455 14:00:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:21.455 [2024-10-09 14:00:27.844045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:31:22.395 "name": "raid_bdev1", 00:31:22.395 "uuid": "556192a1-2e6c-4fc7-bec9-4529aa948f4c", 00:31:22.395 "strip_size_kb": 64, 00:31:22.395 "state": "online", 00:31:22.395 "raid_level": "concat", 00:31:22.395 "superblock": true, 00:31:22.395 "num_base_bdevs": 3, 00:31:22.395 "num_base_bdevs_discovered": 3, 00:31:22.395 "num_base_bdevs_operational": 3, 00:31:22.395 "base_bdevs_list": [ 00:31:22.395 { 00:31:22.395 "name": "BaseBdev1", 00:31:22.395 "uuid": "9d52cd81-2b84-5b99-8f9d-bffae3779706", 00:31:22.395 "is_configured": true, 00:31:22.395 "data_offset": 2048, 00:31:22.395 "data_size": 63488 00:31:22.395 }, 00:31:22.395 { 00:31:22.395 "name": "BaseBdev2", 00:31:22.395 "uuid": "9bab13ea-ce4e-5452-9e19-e6eeda1b492e", 00:31:22.395 "is_configured": true, 00:31:22.395 "data_offset": 2048, 00:31:22.395 "data_size": 63488 00:31:22.395 }, 00:31:22.395 { 00:31:22.395 "name": "BaseBdev3", 00:31:22.395 "uuid": "4a2b76b5-87ba-5967-bea0-ff4fb09613fb", 00:31:22.395 "is_configured": true, 00:31:22.395 "data_offset": 2048, 00:31:22.395 "data_size": 63488 00:31:22.395 } 00:31:22.395 ] 00:31:22.395 }' 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.395 14:00:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.653 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:22.653 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.653 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.912 [2024-10-09 14:00:29.203617] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:22.912 [2024-10-09 14:00:29.203822] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:22.912 [2024-10-09 14:00:29.206810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:31:22.912 { 00:31:22.912 "results": [ 00:31:22.912 { 00:31:22.912 "job": "raid_bdev1", 00:31:22.912 "core_mask": "0x1", 00:31:22.912 "workload": "randrw", 00:31:22.912 "percentage": 50, 00:31:22.912 "status": "finished", 00:31:22.912 "queue_depth": 1, 00:31:22.912 "io_size": 131072, 00:31:22.912 "runtime": 1.357002, 00:31:22.912 "iops": 15022.085450131983, 00:31:22.912 "mibps": 1877.7606812664978, 00:31:22.912 "io_failed": 1, 00:31:22.912 "io_timeout": 0, 00:31:22.912 "avg_latency_us": 91.90508593666054, 00:31:22.912 "min_latency_us": 27.30666666666667, 00:31:22.912 "max_latency_us": 1583.7866666666666 00:31:22.912 } 00:31:22.912 ], 00:31:22.912 "core_count": 1 00:31:22.912 } 00:31:22.912 [2024-10-09 14:00:29.207013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.912 [2024-10-09 14:00:29.207066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:22.912 [2024-10-09 14:00:29.207083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78661 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78661 ']' 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78661 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78661 00:31:22.912 killing process with pid 78661 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78661' 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78661 00:31:22.912 [2024-10-09 14:00:29.251387] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:22.912 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78661 00:31:22.912 [2024-10-09 14:00:29.278605] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l8oVEHf6dC 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:31:23.170 00:31:23.170 real 0m3.426s 00:31:23.170 user 0m4.372s 00:31:23.170 sys 0m0.606s 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.170 14:00:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.170 ************************************ 00:31:23.170 END TEST raid_write_error_test 00:31:23.170 ************************************ 00:31:23.170 
14:00:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:31:23.170 14:00:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:31:23.170 14:00:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:23.170 14:00:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.170 14:00:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:23.170 ************************************ 00:31:23.170 START TEST raid_state_function_test 00:31:23.171 ************************************ 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78788 00:31:23.171 Process raid pid: 78788 00:31:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78788' 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78788 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78788 ']' 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.171 14:00:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.429 [2024-10-09 14:00:29.729016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:23.429 [2024-10-09 14:00:29.730127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.429 [2024-10-09 14:00:29.918673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.688 [2024-10-09 14:00:29.982700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.688 [2024-10-09 14:00:30.037389] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:23.688 [2024-10-09 14:00:30.037453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.256 [2024-10-09 14:00:30.731712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:24.256 [2024-10-09 14:00:30.731770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:24.256 [2024-10-09 14:00:30.731787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:24.256 [2024-10-09 14:00:30.731802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:24.256 [2024-10-09 14:00:30.731811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:31:24.256 [2024-10-09 14:00:30.731828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.256 14:00:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.256 "name": "Existed_Raid", 00:31:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.256 "strip_size_kb": 0, 00:31:24.256 "state": "configuring", 00:31:24.256 "raid_level": "raid1", 00:31:24.256 "superblock": false, 00:31:24.256 "num_base_bdevs": 3, 00:31:24.256 "num_base_bdevs_discovered": 0, 00:31:24.256 "num_base_bdevs_operational": 3, 00:31:24.256 "base_bdevs_list": [ 00:31:24.256 { 00:31:24.256 "name": "BaseBdev1", 00:31:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.256 "is_configured": false, 00:31:24.256 "data_offset": 0, 00:31:24.256 "data_size": 0 00:31:24.256 }, 00:31:24.256 { 00:31:24.256 "name": "BaseBdev2", 00:31:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.256 "is_configured": false, 00:31:24.256 "data_offset": 0, 00:31:24.256 "data_size": 0 00:31:24.256 }, 00:31:24.256 { 00:31:24.256 "name": "BaseBdev3", 00:31:24.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.256 "is_configured": false, 00:31:24.256 "data_offset": 0, 00:31:24.256 "data_size": 0 00:31:24.256 } 00:31:24.256 ] 00:31:24.256 }' 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.256 14:00:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 [2024-10-09 14:00:31.247768] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:24.824 [2024-10-09 14:00:31.247818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 
00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 [2024-10-09 14:00:31.259813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:24.824 [2024-10-09 14:00:31.259981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:24.824 [2024-10-09 14:00:31.260073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:24.824 [2024-10-09 14:00:31.260122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:24.824 [2024-10-09 14:00:31.260303] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:24.824 [2024-10-09 14:00:31.260355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 [2024-10-09 14:00:31.278077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:24.824 BaseBdev1 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.824 [ 00:31:24.824 { 00:31:24.824 "name": "BaseBdev1", 00:31:24.824 "aliases": [ 00:31:24.824 "de75ef26-d65b-4e80-b43a-ea5c2df3d037" 00:31:24.824 ], 00:31:24.824 "product_name": "Malloc disk", 00:31:24.824 "block_size": 512, 00:31:24.824 "num_blocks": 65536, 00:31:24.824 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:24.824 "assigned_rate_limits": { 00:31:24.824 "rw_ios_per_sec": 0, 00:31:24.824 "rw_mbytes_per_sec": 0, 00:31:24.824 "r_mbytes_per_sec": 0, 00:31:24.824 "w_mbytes_per_sec": 0 00:31:24.824 }, 
00:31:24.824 "claimed": true, 00:31:24.824 "claim_type": "exclusive_write", 00:31:24.824 "zoned": false, 00:31:24.824 "supported_io_types": { 00:31:24.824 "read": true, 00:31:24.824 "write": true, 00:31:24.824 "unmap": true, 00:31:24.824 "flush": true, 00:31:24.824 "reset": true, 00:31:24.824 "nvme_admin": false, 00:31:24.824 "nvme_io": false, 00:31:24.824 "nvme_io_md": false, 00:31:24.824 "write_zeroes": true, 00:31:24.824 "zcopy": true, 00:31:24.824 "get_zone_info": false, 00:31:24.824 "zone_management": false, 00:31:24.824 "zone_append": false, 00:31:24.824 "compare": false, 00:31:24.824 "compare_and_write": false, 00:31:24.824 "abort": true, 00:31:24.824 "seek_hole": false, 00:31:24.824 "seek_data": false, 00:31:24.824 "copy": true, 00:31:24.824 "nvme_iov_md": false 00:31:24.824 }, 00:31:24.824 "memory_domains": [ 00:31:24.824 { 00:31:24.824 "dma_device_id": "system", 00:31:24.824 "dma_device_type": 1 00:31:24.824 }, 00:31:24.824 { 00:31:24.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.824 "dma_device_type": 2 00:31:24.824 } 00:31:24.824 ], 00:31:24.824 "driver_specific": {} 00:31:24.824 } 00:31:24.824 ] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:24.824 14:00:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:24.824 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:24.825 "name": "Existed_Raid", 00:31:24.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.825 "strip_size_kb": 0, 00:31:24.825 "state": "configuring", 00:31:24.825 "raid_level": "raid1", 00:31:24.825 "superblock": false, 00:31:24.825 "num_base_bdevs": 3, 00:31:24.825 "num_base_bdevs_discovered": 1, 00:31:24.825 "num_base_bdevs_operational": 3, 00:31:24.825 "base_bdevs_list": [ 00:31:24.825 { 00:31:24.825 "name": "BaseBdev1", 00:31:24.825 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:24.825 "is_configured": true, 00:31:24.825 "data_offset": 0, 00:31:24.825 "data_size": 65536 00:31:24.825 }, 00:31:24.825 { 00:31:24.825 "name": "BaseBdev2", 00:31:24.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.825 "is_configured": false, 00:31:24.825 
"data_offset": 0, 00:31:24.825 "data_size": 0 00:31:24.825 }, 00:31:24.825 { 00:31:24.825 "name": "BaseBdev3", 00:31:24.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.825 "is_configured": false, 00:31:24.825 "data_offset": 0, 00:31:24.825 "data_size": 0 00:31:24.825 } 00:31:24.825 ] 00:31:24.825 }' 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:24.825 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.394 [2024-10-09 14:00:31.782242] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:25.394 [2024-10-09 14:00:31.782302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.394 [2024-10-09 14:00:31.790264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:25.394 [2024-10-09 14:00:31.792516] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:25.394 [2024-10-09 14:00:31.792578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:31:25.394 [2024-10-09 14:00:31.792590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:25.394 [2024-10-09 14:00:31.792619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.394 "name": "Existed_Raid", 00:31:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.394 "strip_size_kb": 0, 00:31:25.394 "state": "configuring", 00:31:25.394 "raid_level": "raid1", 00:31:25.394 "superblock": false, 00:31:25.394 "num_base_bdevs": 3, 00:31:25.394 "num_base_bdevs_discovered": 1, 00:31:25.394 "num_base_bdevs_operational": 3, 00:31:25.394 "base_bdevs_list": [ 00:31:25.394 { 00:31:25.394 "name": "BaseBdev1", 00:31:25.394 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:25.394 "is_configured": true, 00:31:25.394 "data_offset": 0, 00:31:25.394 "data_size": 65536 00:31:25.394 }, 00:31:25.394 { 00:31:25.394 "name": "BaseBdev2", 00:31:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.394 "is_configured": false, 00:31:25.394 "data_offset": 0, 00:31:25.394 "data_size": 0 00:31:25.394 }, 00:31:25.394 { 00:31:25.394 "name": "BaseBdev3", 00:31:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.394 "is_configured": false, 00:31:25.394 "data_offset": 0, 00:31:25.394 "data_size": 0 00:31:25.394 } 00:31:25.394 ] 00:31:25.394 }' 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.394 14:00:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.963 
14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.963 [2024-10-09 14:00:32.279209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:25.963 BaseBdev2 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.963 [ 00:31:25.963 { 00:31:25.963 "name": "BaseBdev2", 00:31:25.963 "aliases": [ 00:31:25.963 "c49759b5-7d13-4cc9-96f4-55059ffc3a84" 00:31:25.963 ], 00:31:25.963 "product_name": 
"Malloc disk", 00:31:25.963 "block_size": 512, 00:31:25.963 "num_blocks": 65536, 00:31:25.963 "uuid": "c49759b5-7d13-4cc9-96f4-55059ffc3a84", 00:31:25.963 "assigned_rate_limits": { 00:31:25.963 "rw_ios_per_sec": 0, 00:31:25.963 "rw_mbytes_per_sec": 0, 00:31:25.963 "r_mbytes_per_sec": 0, 00:31:25.963 "w_mbytes_per_sec": 0 00:31:25.963 }, 00:31:25.963 "claimed": true, 00:31:25.963 "claim_type": "exclusive_write", 00:31:25.963 "zoned": false, 00:31:25.963 "supported_io_types": { 00:31:25.963 "read": true, 00:31:25.963 "write": true, 00:31:25.963 "unmap": true, 00:31:25.963 "flush": true, 00:31:25.963 "reset": true, 00:31:25.963 "nvme_admin": false, 00:31:25.963 "nvme_io": false, 00:31:25.963 "nvme_io_md": false, 00:31:25.963 "write_zeroes": true, 00:31:25.963 "zcopy": true, 00:31:25.963 "get_zone_info": false, 00:31:25.963 "zone_management": false, 00:31:25.963 "zone_append": false, 00:31:25.963 "compare": false, 00:31:25.963 "compare_and_write": false, 00:31:25.963 "abort": true, 00:31:25.963 "seek_hole": false, 00:31:25.963 "seek_data": false, 00:31:25.963 "copy": true, 00:31:25.963 "nvme_iov_md": false 00:31:25.963 }, 00:31:25.963 "memory_domains": [ 00:31:25.963 { 00:31:25.963 "dma_device_id": "system", 00:31:25.963 "dma_device_type": 1 00:31:25.963 }, 00:31:25.963 { 00:31:25.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.963 "dma_device_type": 2 00:31:25.963 } 00:31:25.963 ], 00:31:25.963 "driver_specific": {} 00:31:25.963 } 00:31:25.963 ] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:25.963 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:25.964 "name": "Existed_Raid", 00:31:25.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.964 "strip_size_kb": 0, 00:31:25.964 "state": "configuring", 00:31:25.964 "raid_level": "raid1", 00:31:25.964 "superblock": false, 00:31:25.964 
"num_base_bdevs": 3, 00:31:25.964 "num_base_bdevs_discovered": 2, 00:31:25.964 "num_base_bdevs_operational": 3, 00:31:25.964 "base_bdevs_list": [ 00:31:25.964 { 00:31:25.964 "name": "BaseBdev1", 00:31:25.964 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:25.964 "is_configured": true, 00:31:25.964 "data_offset": 0, 00:31:25.964 "data_size": 65536 00:31:25.964 }, 00:31:25.964 { 00:31:25.964 "name": "BaseBdev2", 00:31:25.964 "uuid": "c49759b5-7d13-4cc9-96f4-55059ffc3a84", 00:31:25.964 "is_configured": true, 00:31:25.964 "data_offset": 0, 00:31:25.964 "data_size": 65536 00:31:25.964 }, 00:31:25.964 { 00:31:25.964 "name": "BaseBdev3", 00:31:25.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.964 "is_configured": false, 00:31:25.964 "data_offset": 0, 00:31:25.964 "data_size": 0 00:31:25.964 } 00:31:25.964 ] 00:31:25.964 }' 00:31:25.964 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:25.964 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.532 [2024-10-09 14:00:32.818807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:26.532 [2024-10-09 14:00:32.819098] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:31:26.532 [2024-10-09 14:00:32.819127] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:26.532 [2024-10-09 14:00:32.819510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:26.532 [2024-10-09 14:00:32.819690] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000006980 00:31:26.532 [2024-10-09 14:00:32.819708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:31:26.532 [2024-10-09 14:00:32.819921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:26.532 BaseBdev3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.532 [ 00:31:26.532 { 00:31:26.532 "name": "BaseBdev3", 00:31:26.532 "aliases": [ 00:31:26.532 
"ee7e3035-986b-4f2d-b981-657cc088d171" 00:31:26.532 ], 00:31:26.532 "product_name": "Malloc disk", 00:31:26.532 "block_size": 512, 00:31:26.532 "num_blocks": 65536, 00:31:26.532 "uuid": "ee7e3035-986b-4f2d-b981-657cc088d171", 00:31:26.532 "assigned_rate_limits": { 00:31:26.532 "rw_ios_per_sec": 0, 00:31:26.532 "rw_mbytes_per_sec": 0, 00:31:26.532 "r_mbytes_per_sec": 0, 00:31:26.532 "w_mbytes_per_sec": 0 00:31:26.532 }, 00:31:26.532 "claimed": true, 00:31:26.532 "claim_type": "exclusive_write", 00:31:26.532 "zoned": false, 00:31:26.532 "supported_io_types": { 00:31:26.532 "read": true, 00:31:26.532 "write": true, 00:31:26.532 "unmap": true, 00:31:26.532 "flush": true, 00:31:26.532 "reset": true, 00:31:26.532 "nvme_admin": false, 00:31:26.532 "nvme_io": false, 00:31:26.532 "nvme_io_md": false, 00:31:26.532 "write_zeroes": true, 00:31:26.532 "zcopy": true, 00:31:26.532 "get_zone_info": false, 00:31:26.532 "zone_management": false, 00:31:26.532 "zone_append": false, 00:31:26.532 "compare": false, 00:31:26.532 "compare_and_write": false, 00:31:26.532 "abort": true, 00:31:26.532 "seek_hole": false, 00:31:26.532 "seek_data": false, 00:31:26.532 "copy": true, 00:31:26.532 "nvme_iov_md": false 00:31:26.532 }, 00:31:26.532 "memory_domains": [ 00:31:26.532 { 00:31:26.532 "dma_device_id": "system", 00:31:26.532 "dma_device_type": 1 00:31:26.532 }, 00:31:26.532 { 00:31:26.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:26.532 "dma_device_type": 2 00:31:26.532 } 00:31:26.532 ], 00:31:26.532 "driver_specific": {} 00:31:26.532 } 00:31:26.532 ] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:26.532 
14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.532 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:26.532 "name": "Existed_Raid", 00:31:26.532 "uuid": "67dfc6d5-c0ab-49e8-9f7a-aa26f229bc93", 00:31:26.532 "strip_size_kb": 0, 00:31:26.532 "state": "online", 00:31:26.532 "raid_level": 
"raid1", 00:31:26.532 "superblock": false, 00:31:26.532 "num_base_bdevs": 3, 00:31:26.533 "num_base_bdevs_discovered": 3, 00:31:26.533 "num_base_bdevs_operational": 3, 00:31:26.533 "base_bdevs_list": [ 00:31:26.533 { 00:31:26.533 "name": "BaseBdev1", 00:31:26.533 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:26.533 "is_configured": true, 00:31:26.533 "data_offset": 0, 00:31:26.533 "data_size": 65536 00:31:26.533 }, 00:31:26.533 { 00:31:26.533 "name": "BaseBdev2", 00:31:26.533 "uuid": "c49759b5-7d13-4cc9-96f4-55059ffc3a84", 00:31:26.533 "is_configured": true, 00:31:26.533 "data_offset": 0, 00:31:26.533 "data_size": 65536 00:31:26.533 }, 00:31:26.533 { 00:31:26.533 "name": "BaseBdev3", 00:31:26.533 "uuid": "ee7e3035-986b-4f2d-b981-657cc088d171", 00:31:26.533 "is_configured": true, 00:31:26.533 "data_offset": 0, 00:31:26.533 "data_size": 65536 00:31:26.533 } 00:31:26.533 ] 00:31:26.533 }' 00:31:26.533 14:00:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:26.533 14:00:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.791 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:26.791 [2024-10-09 14:00:33.335318] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:27.050 "name": "Existed_Raid", 00:31:27.050 "aliases": [ 00:31:27.050 "67dfc6d5-c0ab-49e8-9f7a-aa26f229bc93" 00:31:27.050 ], 00:31:27.050 "product_name": "Raid Volume", 00:31:27.050 "block_size": 512, 00:31:27.050 "num_blocks": 65536, 00:31:27.050 "uuid": "67dfc6d5-c0ab-49e8-9f7a-aa26f229bc93", 00:31:27.050 "assigned_rate_limits": { 00:31:27.050 "rw_ios_per_sec": 0, 00:31:27.050 "rw_mbytes_per_sec": 0, 00:31:27.050 "r_mbytes_per_sec": 0, 00:31:27.050 "w_mbytes_per_sec": 0 00:31:27.050 }, 00:31:27.050 "claimed": false, 00:31:27.050 "zoned": false, 00:31:27.050 "supported_io_types": { 00:31:27.050 "read": true, 00:31:27.050 "write": true, 00:31:27.050 "unmap": false, 00:31:27.050 "flush": false, 00:31:27.050 "reset": true, 00:31:27.050 "nvme_admin": false, 00:31:27.050 "nvme_io": false, 00:31:27.050 "nvme_io_md": false, 00:31:27.050 "write_zeroes": true, 00:31:27.050 "zcopy": false, 00:31:27.050 "get_zone_info": false, 00:31:27.050 "zone_management": false, 00:31:27.050 "zone_append": false, 00:31:27.050 "compare": false, 00:31:27.050 "compare_and_write": false, 00:31:27.050 "abort": false, 00:31:27.050 "seek_hole": false, 00:31:27.050 "seek_data": false, 00:31:27.050 "copy": false, 00:31:27.050 "nvme_iov_md": false 00:31:27.050 }, 00:31:27.050 "memory_domains": [ 00:31:27.050 { 00:31:27.050 "dma_device_id": "system", 00:31:27.050 "dma_device_type": 1 00:31:27.050 }, 00:31:27.050 { 
00:31:27.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.050 "dma_device_type": 2 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "dma_device_id": "system", 00:31:27.050 "dma_device_type": 1 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.050 "dma_device_type": 2 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "dma_device_id": "system", 00:31:27.050 "dma_device_type": 1 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.050 "dma_device_type": 2 00:31:27.050 } 00:31:27.050 ], 00:31:27.050 "driver_specific": { 00:31:27.050 "raid": { 00:31:27.050 "uuid": "67dfc6d5-c0ab-49e8-9f7a-aa26f229bc93", 00:31:27.050 "strip_size_kb": 0, 00:31:27.050 "state": "online", 00:31:27.050 "raid_level": "raid1", 00:31:27.050 "superblock": false, 00:31:27.050 "num_base_bdevs": 3, 00:31:27.050 "num_base_bdevs_discovered": 3, 00:31:27.050 "num_base_bdevs_operational": 3, 00:31:27.050 "base_bdevs_list": [ 00:31:27.050 { 00:31:27.050 "name": "BaseBdev1", 00:31:27.050 "uuid": "de75ef26-d65b-4e80-b43a-ea5c2df3d037", 00:31:27.050 "is_configured": true, 00:31:27.050 "data_offset": 0, 00:31:27.050 "data_size": 65536 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "name": "BaseBdev2", 00:31:27.050 "uuid": "c49759b5-7d13-4cc9-96f4-55059ffc3a84", 00:31:27.050 "is_configured": true, 00:31:27.050 "data_offset": 0, 00:31:27.050 "data_size": 65536 00:31:27.050 }, 00:31:27.050 { 00:31:27.050 "name": "BaseBdev3", 00:31:27.050 "uuid": "ee7e3035-986b-4f2d-b981-657cc088d171", 00:31:27.050 "is_configured": true, 00:31:27.050 "data_offset": 0, 00:31:27.050 "data_size": 65536 00:31:27.050 } 00:31:27.050 ] 00:31:27.050 } 00:31:27.050 } 00:31:27.050 }' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:31:27.050 BaseBdev2 00:31:27.050 BaseBdev3' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.050 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.051 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.310 [2024-10-09 14:00:33.635144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.310 14:00:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.310 "name": "Existed_Raid", 00:31:27.310 "uuid": "67dfc6d5-c0ab-49e8-9f7a-aa26f229bc93", 00:31:27.310 "strip_size_kb": 0, 00:31:27.310 "state": "online", 00:31:27.310 "raid_level": "raid1", 00:31:27.310 "superblock": false, 00:31:27.310 "num_base_bdevs": 3, 00:31:27.310 "num_base_bdevs_discovered": 2, 00:31:27.310 "num_base_bdevs_operational": 2, 00:31:27.310 "base_bdevs_list": [ 00:31:27.310 { 00:31:27.310 "name": null, 00:31:27.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.310 "is_configured": false, 00:31:27.310 "data_offset": 0, 00:31:27.310 "data_size": 65536 00:31:27.310 }, 00:31:27.310 { 00:31:27.310 "name": "BaseBdev2", 00:31:27.310 "uuid": "c49759b5-7d13-4cc9-96f4-55059ffc3a84", 00:31:27.310 "is_configured": true, 00:31:27.310 "data_offset": 0, 00:31:27.310 "data_size": 65536 00:31:27.310 }, 00:31:27.310 { 00:31:27.310 "name": "BaseBdev3", 00:31:27.310 "uuid": "ee7e3035-986b-4f2d-b981-657cc088d171", 00:31:27.310 "is_configured": true, 00:31:27.310 "data_offset": 0, 00:31:27.310 "data_size": 65536 00:31:27.310 } 00:31:27.310 ] 00:31:27.310 }' 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.310 14:00:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:31:27.569 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 [2024-10-09 14:00:34.143412] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 [2024-10-09 14:00:34.211534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:27.829 [2024-10-09 14:00:34.211644] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:27.829 [2024-10-09 14:00:34.224390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:27.829 [2024-10-09 14:00:34.224663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:27.829 [2024-10-09 14:00:34.224867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.829 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.829 [ 00:31:27.829 { 00:31:27.829 "name": "BaseBdev2", 00:31:27.829 "aliases": [ 00:31:27.829 "9ea00256-a50c-4652-846e-85725eb28b0f" 00:31:27.829 ], 00:31:27.829 "product_name": "Malloc disk", 00:31:27.829 "block_size": 512, 00:31:27.829 "num_blocks": 65536, 00:31:27.829 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:27.829 "assigned_rate_limits": { 00:31:27.829 "rw_ios_per_sec": 0, 00:31:27.829 "rw_mbytes_per_sec": 0, 00:31:27.829 "r_mbytes_per_sec": 0, 00:31:27.829 "w_mbytes_per_sec": 0 00:31:27.829 }, 00:31:27.829 "claimed": false, 00:31:27.829 "zoned": false, 00:31:27.829 "supported_io_types": { 00:31:27.829 "read": true, 00:31:27.829 "write": true, 00:31:27.829 "unmap": true, 00:31:27.829 "flush": true, 00:31:27.829 "reset": true, 00:31:27.829 "nvme_admin": false, 00:31:27.829 "nvme_io": false, 00:31:27.829 "nvme_io_md": false, 00:31:27.829 "write_zeroes": true, 00:31:27.829 "zcopy": true, 00:31:27.829 "get_zone_info": false, 00:31:27.829 "zone_management": false, 00:31:27.829 "zone_append": false, 00:31:27.829 "compare": false, 00:31:27.829 "compare_and_write": false, 00:31:27.829 "abort": true, 00:31:27.829 "seek_hole": false, 00:31:27.829 "seek_data": false, 00:31:27.829 "copy": true, 00:31:27.829 "nvme_iov_md": false 00:31:27.829 }, 00:31:27.829 "memory_domains": [ 00:31:27.829 { 00:31:27.829 "dma_device_id": "system", 00:31:27.829 "dma_device_type": 1 00:31:27.829 }, 00:31:27.829 { 00:31:27.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.830 "dma_device_type": 2 00:31:27.830 } 00:31:27.830 ], 00:31:27.830 "driver_specific": {} 00:31:27.830 } 00:31:27.830 ] 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.830 
14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.830 BaseBdev3 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.830 [ 00:31:27.830 { 00:31:27.830 "name": "BaseBdev3", 00:31:27.830 "aliases": [ 00:31:27.830 "d2b5a241-a4db-41cd-a72d-8d14198528c9" 00:31:27.830 ], 00:31:27.830 "product_name": "Malloc disk", 00:31:27.830 "block_size": 512, 00:31:27.830 "num_blocks": 65536, 00:31:27.830 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:27.830 "assigned_rate_limits": { 00:31:27.830 "rw_ios_per_sec": 0, 00:31:27.830 "rw_mbytes_per_sec": 0, 00:31:27.830 "r_mbytes_per_sec": 0, 00:31:27.830 "w_mbytes_per_sec": 0 00:31:27.830 }, 00:31:27.830 "claimed": false, 00:31:27.830 "zoned": false, 00:31:27.830 "supported_io_types": { 00:31:27.830 "read": true, 00:31:27.830 "write": true, 00:31:27.830 "unmap": true, 00:31:27.830 "flush": true, 00:31:27.830 "reset": true, 00:31:27.830 "nvme_admin": false, 00:31:27.830 "nvme_io": false, 00:31:27.830 "nvme_io_md": false, 00:31:27.830 "write_zeroes": true, 00:31:27.830 "zcopy": true, 00:31:27.830 "get_zone_info": false, 00:31:27.830 "zone_management": false, 00:31:27.830 "zone_append": false, 00:31:27.830 "compare": false, 00:31:27.830 "compare_and_write": false, 00:31:27.830 "abort": true, 00:31:27.830 "seek_hole": false, 00:31:27.830 "seek_data": false, 00:31:27.830 "copy": true, 00:31:27.830 "nvme_iov_md": false 00:31:27.830 }, 00:31:27.830 "memory_domains": [ 00:31:27.830 { 00:31:27.830 "dma_device_id": "system", 00:31:27.830 "dma_device_type": 1 00:31:27.830 }, 00:31:27.830 { 00:31:27.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:27.830 "dma_device_type": 2 00:31:27.830 } 00:31:27.830 ], 00:31:27.830 "driver_specific": {} 00:31:27.830 } 00:31:27.830 ] 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.830 14:00:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:27.830 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.830 [2024-10-09 14:00:34.378150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:27.830 [2024-10-09 14:00:34.378312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:28.102 [2024-10-09 14:00:34.378389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:28.102 [2024-10-09 14:00:34.380680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.102 "name": "Existed_Raid", 00:31:28.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.102 "strip_size_kb": 0, 00:31:28.102 "state": "configuring", 00:31:28.102 "raid_level": "raid1", 00:31:28.102 "superblock": false, 00:31:28.102 "num_base_bdevs": 3, 00:31:28.102 "num_base_bdevs_discovered": 2, 00:31:28.102 "num_base_bdevs_operational": 3, 00:31:28.102 "base_bdevs_list": [ 00:31:28.102 { 00:31:28.102 "name": "BaseBdev1", 00:31:28.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.102 "is_configured": false, 00:31:28.102 "data_offset": 0, 00:31:28.102 "data_size": 0 00:31:28.102 }, 00:31:28.102 { 00:31:28.102 "name": "BaseBdev2", 00:31:28.102 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:28.102 "is_configured": true, 00:31:28.102 "data_offset": 0, 00:31:28.102 "data_size": 65536 00:31:28.102 }, 00:31:28.102 { 
00:31:28.102 "name": "BaseBdev3", 00:31:28.102 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:28.102 "is_configured": true, 00:31:28.102 "data_offset": 0, 00:31:28.102 "data_size": 65536 00:31:28.102 } 00:31:28.102 ] 00:31:28.102 }' 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.102 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 [2024-10-09 14:00:34.818256] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.389 14:00:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.389 "name": "Existed_Raid", 00:31:28.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.389 "strip_size_kb": 0, 00:31:28.389 "state": "configuring", 00:31:28.389 "raid_level": "raid1", 00:31:28.389 "superblock": false, 00:31:28.389 "num_base_bdevs": 3, 00:31:28.389 "num_base_bdevs_discovered": 1, 00:31:28.389 "num_base_bdevs_operational": 3, 00:31:28.389 "base_bdevs_list": [ 00:31:28.389 { 00:31:28.389 "name": "BaseBdev1", 00:31:28.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.390 "is_configured": false, 00:31:28.390 "data_offset": 0, 00:31:28.390 "data_size": 0 00:31:28.390 }, 00:31:28.390 { 00:31:28.390 "name": null, 00:31:28.390 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:28.390 "is_configured": false, 00:31:28.390 "data_offset": 0, 00:31:28.390 "data_size": 65536 00:31:28.390 }, 00:31:28.390 { 00:31:28.390 "name": "BaseBdev3", 00:31:28.390 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:28.390 "is_configured": true, 00:31:28.390 "data_offset": 0, 00:31:28.390 "data_size": 65536 00:31:28.390 } 00:31:28.390 ] 00:31:28.390 }' 00:31:28.390 14:00:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.390 14:00:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 [2024-10-09 14:00:35.325389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:28.956 BaseBdev1 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:28.956 
14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 [ 00:31:28.956 { 00:31:28.956 "name": "BaseBdev1", 00:31:28.956 "aliases": [ 00:31:28.956 "694ad859-2fed-4926-b406-1f5ec65e1432" 00:31:28.956 ], 00:31:28.956 "product_name": "Malloc disk", 00:31:28.956 "block_size": 512, 00:31:28.956 "num_blocks": 65536, 00:31:28.956 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:28.956 "assigned_rate_limits": { 00:31:28.956 "rw_ios_per_sec": 0, 00:31:28.956 "rw_mbytes_per_sec": 0, 00:31:28.956 "r_mbytes_per_sec": 0, 00:31:28.956 "w_mbytes_per_sec": 0 00:31:28.956 }, 00:31:28.956 "claimed": true, 00:31:28.956 "claim_type": "exclusive_write", 00:31:28.956 "zoned": false, 00:31:28.956 "supported_io_types": { 00:31:28.956 "read": true, 00:31:28.956 "write": true, 00:31:28.956 "unmap": true, 00:31:28.956 "flush": true, 00:31:28.956 "reset": true, 00:31:28.956 "nvme_admin": false, 00:31:28.956 "nvme_io": false, 00:31:28.956 "nvme_io_md": false, 00:31:28.956 "write_zeroes": true, 00:31:28.956 "zcopy": true, 00:31:28.956 "get_zone_info": false, 00:31:28.956 "zone_management": false, 00:31:28.956 "zone_append": false, 00:31:28.956 "compare": 
false, 00:31:28.956 "compare_and_write": false, 00:31:28.956 "abort": true, 00:31:28.956 "seek_hole": false, 00:31:28.956 "seek_data": false, 00:31:28.956 "copy": true, 00:31:28.956 "nvme_iov_md": false 00:31:28.956 }, 00:31:28.956 "memory_domains": [ 00:31:28.956 { 00:31:28.956 "dma_device_id": "system", 00:31:28.956 "dma_device_type": 1 00:31:28.956 }, 00:31:28.956 { 00:31:28.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:28.956 "dma_device_type": 2 00:31:28.956 } 00:31:28.956 ], 00:31:28.956 "driver_specific": {} 00:31:28.956 } 00:31:28.956 ] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:28.956 "name": "Existed_Raid", 00:31:28.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.956 "strip_size_kb": 0, 00:31:28.956 "state": "configuring", 00:31:28.956 "raid_level": "raid1", 00:31:28.956 "superblock": false, 00:31:28.956 "num_base_bdevs": 3, 00:31:28.956 "num_base_bdevs_discovered": 2, 00:31:28.956 "num_base_bdevs_operational": 3, 00:31:28.956 "base_bdevs_list": [ 00:31:28.956 { 00:31:28.956 "name": "BaseBdev1", 00:31:28.956 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:28.956 "is_configured": true, 00:31:28.956 "data_offset": 0, 00:31:28.956 "data_size": 65536 00:31:28.956 }, 00:31:28.956 { 00:31:28.956 "name": null, 00:31:28.956 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:28.956 "is_configured": false, 00:31:28.956 "data_offset": 0, 00:31:28.956 "data_size": 65536 00:31:28.956 }, 00:31:28.956 { 00:31:28.956 "name": "BaseBdev3", 00:31:28.956 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:28.956 "is_configured": true, 00:31:28.956 "data_offset": 0, 00:31:28.956 "data_size": 65536 00:31:28.956 } 00:31:28.956 ] 00:31:28.956 }' 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:28.956 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.523 [2024-10-09 14:00:35.861547] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:29.523 
14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:29.523 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:29.523 "name": "Existed_Raid", 00:31:29.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.523 "strip_size_kb": 0, 00:31:29.523 "state": "configuring", 00:31:29.523 "raid_level": "raid1", 00:31:29.523 "superblock": false, 00:31:29.523 "num_base_bdevs": 3, 00:31:29.523 "num_base_bdevs_discovered": 1, 00:31:29.523 "num_base_bdevs_operational": 3, 00:31:29.523 "base_bdevs_list": [ 00:31:29.523 { 00:31:29.523 "name": "BaseBdev1", 00:31:29.523 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:29.523 "is_configured": true, 00:31:29.523 "data_offset": 0, 00:31:29.523 "data_size": 65536 00:31:29.523 }, 00:31:29.523 { 00:31:29.523 "name": null, 00:31:29.523 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:29.523 "is_configured": false, 00:31:29.523 "data_offset": 0, 00:31:29.523 "data_size": 65536 00:31:29.523 }, 00:31:29.523 { 00:31:29.523 "name": null, 00:31:29.523 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:29.523 "is_configured": false, 00:31:29.523 "data_offset": 0, 
00:31:29.524 "data_size": 65536 00:31:29.524 } 00:31:29.524 ] 00:31:29.524 }' 00:31:29.524 14:00:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:29.524 14:00:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.781 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:29.781 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:29.781 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.781 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:29.781 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.040 [2024-10-09 14:00:36.361719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:30.040 "name": "Existed_Raid", 00:31:30.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.040 "strip_size_kb": 0, 00:31:30.040 "state": "configuring", 00:31:30.040 "raid_level": "raid1", 00:31:30.040 "superblock": false, 00:31:30.040 "num_base_bdevs": 3, 00:31:30.040 "num_base_bdevs_discovered": 2, 00:31:30.040 "num_base_bdevs_operational": 3, 00:31:30.040 "base_bdevs_list": [ 00:31:30.040 { 00:31:30.040 "name": "BaseBdev1", 00:31:30.040 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:30.040 "is_configured": true, 00:31:30.040 "data_offset": 0, 00:31:30.040 "data_size": 65536 
00:31:30.040 }, 00:31:30.040 { 00:31:30.040 "name": null, 00:31:30.040 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:30.040 "is_configured": false, 00:31:30.040 "data_offset": 0, 00:31:30.040 "data_size": 65536 00:31:30.040 }, 00:31:30.040 { 00:31:30.040 "name": "BaseBdev3", 00:31:30.040 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:30.040 "is_configured": true, 00:31:30.040 "data_offset": 0, 00:31:30.040 "data_size": 65536 00:31:30.040 } 00:31:30.040 ] 00:31:30.040 }' 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:30.040 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.299 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:30.299 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.299 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.299 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.299 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.558 [2024-10-09 14:00:36.869820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:30.558 "name": "Existed_Raid", 00:31:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.558 "strip_size_kb": 0, 00:31:30.558 "state": "configuring", 00:31:30.558 "raid_level": "raid1", 00:31:30.558 
"superblock": false, 00:31:30.558 "num_base_bdevs": 3, 00:31:30.558 "num_base_bdevs_discovered": 1, 00:31:30.558 "num_base_bdevs_operational": 3, 00:31:30.558 "base_bdevs_list": [ 00:31:30.558 { 00:31:30.558 "name": null, 00:31:30.558 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:30.558 "is_configured": false, 00:31:30.558 "data_offset": 0, 00:31:30.558 "data_size": 65536 00:31:30.558 }, 00:31:30.558 { 00:31:30.558 "name": null, 00:31:30.558 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:30.558 "is_configured": false, 00:31:30.558 "data_offset": 0, 00:31:30.558 "data_size": 65536 00:31:30.558 }, 00:31:30.558 { 00:31:30.558 "name": "BaseBdev3", 00:31:30.558 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:30.558 "is_configured": true, 00:31:30.558 "data_offset": 0, 00:31:30.558 "data_size": 65536 00:31:30.558 } 00:31:30.558 ] 00:31:30.558 }' 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:30.558 14:00:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:30.817 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.817 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:30.817 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.076 [2024-10-09 14:00:37.392631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.076 14:00:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:31.076 "name": "Existed_Raid", 00:31:31.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.076 "strip_size_kb": 0, 00:31:31.076 "state": "configuring", 00:31:31.076 "raid_level": "raid1", 00:31:31.076 "superblock": false, 00:31:31.076 "num_base_bdevs": 3, 00:31:31.076 "num_base_bdevs_discovered": 2, 00:31:31.076 "num_base_bdevs_operational": 3, 00:31:31.076 "base_bdevs_list": [ 00:31:31.076 { 00:31:31.076 "name": null, 00:31:31.076 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:31.076 "is_configured": false, 00:31:31.076 "data_offset": 0, 00:31:31.076 "data_size": 65536 00:31:31.076 }, 00:31:31.076 { 00:31:31.076 "name": "BaseBdev2", 00:31:31.076 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:31.076 "is_configured": true, 00:31:31.076 "data_offset": 0, 00:31:31.076 "data_size": 65536 00:31:31.076 }, 00:31:31.076 { 00:31:31.076 "name": "BaseBdev3", 00:31:31.076 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:31.076 "is_configured": true, 00:31:31.076 "data_offset": 0, 00:31:31.076 "data_size": 65536 00:31:31.076 } 00:31:31.076 ] 00:31:31.076 }' 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:31.076 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.335 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:31.335 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.335 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.335 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.335 14:00:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 694ad859-2fed-4926-b406-1f5ec65e1432 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.595 [2024-10-09 14:00:37.948025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:31.595 [2024-10-09 14:00:37.948266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:31.595 [2024-10-09 14:00:37.948287] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:31.595 [2024-10-09 14:00:37.948625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:31.595 [2024-10-09 14:00:37.948783] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:31.595 [2024-10-09 14:00:37.948801] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:31:31.595 [2024-10-09 14:00:37.949001] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.595 NewBaseBdev 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.595 [ 00:31:31.595 { 00:31:31.595 "name": "NewBaseBdev", 00:31:31.595 "aliases": [ 00:31:31.595 "694ad859-2fed-4926-b406-1f5ec65e1432" 00:31:31.595 ], 00:31:31.595 "product_name": "Malloc disk", 00:31:31.595 "block_size": 512, 00:31:31.595 "num_blocks": 65536, 00:31:31.595 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 
00:31:31.595 "assigned_rate_limits": { 00:31:31.595 "rw_ios_per_sec": 0, 00:31:31.595 "rw_mbytes_per_sec": 0, 00:31:31.595 "r_mbytes_per_sec": 0, 00:31:31.595 "w_mbytes_per_sec": 0 00:31:31.595 }, 00:31:31.595 "claimed": true, 00:31:31.595 "claim_type": "exclusive_write", 00:31:31.595 "zoned": false, 00:31:31.595 "supported_io_types": { 00:31:31.595 "read": true, 00:31:31.595 "write": true, 00:31:31.595 "unmap": true, 00:31:31.595 "flush": true, 00:31:31.595 "reset": true, 00:31:31.595 "nvme_admin": false, 00:31:31.595 "nvme_io": false, 00:31:31.595 "nvme_io_md": false, 00:31:31.595 "write_zeroes": true, 00:31:31.595 "zcopy": true, 00:31:31.595 "get_zone_info": false, 00:31:31.595 "zone_management": false, 00:31:31.595 "zone_append": false, 00:31:31.595 "compare": false, 00:31:31.595 "compare_and_write": false, 00:31:31.595 "abort": true, 00:31:31.595 "seek_hole": false, 00:31:31.595 "seek_data": false, 00:31:31.595 "copy": true, 00:31:31.595 "nvme_iov_md": false 00:31:31.595 }, 00:31:31.595 "memory_domains": [ 00:31:31.595 { 00:31:31.595 "dma_device_id": "system", 00:31:31.595 "dma_device_type": 1 00:31:31.595 }, 00:31:31.595 { 00:31:31.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:31.595 "dma_device_type": 2 00:31:31.595 } 00:31:31.595 ], 00:31:31.595 "driver_specific": {} 00:31:31.595 } 00:31:31.595 ] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:31.595 14:00:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.595 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:31.595 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:31.595 "name": "Existed_Raid", 00:31:31.595 "uuid": "5debdc9a-f5af-4ba6-894b-c5d41ce20c30", 00:31:31.595 "strip_size_kb": 0, 00:31:31.595 "state": "online", 00:31:31.595 "raid_level": "raid1", 00:31:31.595 "superblock": false, 00:31:31.595 "num_base_bdevs": 3, 00:31:31.595 "num_base_bdevs_discovered": 3, 00:31:31.595 "num_base_bdevs_operational": 3, 00:31:31.595 "base_bdevs_list": [ 00:31:31.595 { 00:31:31.595 "name": "NewBaseBdev", 00:31:31.595 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:31.595 "is_configured": true, 00:31:31.595 "data_offset": 0, 00:31:31.595 "data_size": 65536 
00:31:31.595 }, 00:31:31.595 { 00:31:31.595 "name": "BaseBdev2", 00:31:31.595 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:31.595 "is_configured": true, 00:31:31.595 "data_offset": 0, 00:31:31.595 "data_size": 65536 00:31:31.595 }, 00:31:31.595 { 00:31:31.595 "name": "BaseBdev3", 00:31:31.595 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:31.595 "is_configured": true, 00:31:31.595 "data_offset": 0, 00:31:31.595 "data_size": 65536 00:31:31.595 } 00:31:31.595 ] 00:31:31.595 }' 00:31:31.595 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:31.595 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:32.162 [2024-10-09 14:00:38.440489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.162 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:32.162 "name": "Existed_Raid", 00:31:32.162 "aliases": [ 00:31:32.162 "5debdc9a-f5af-4ba6-894b-c5d41ce20c30" 00:31:32.162 ], 00:31:32.162 "product_name": "Raid Volume", 00:31:32.162 "block_size": 512, 00:31:32.162 "num_blocks": 65536, 00:31:32.162 "uuid": "5debdc9a-f5af-4ba6-894b-c5d41ce20c30", 00:31:32.162 "assigned_rate_limits": { 00:31:32.162 "rw_ios_per_sec": 0, 00:31:32.162 "rw_mbytes_per_sec": 0, 00:31:32.162 "r_mbytes_per_sec": 0, 00:31:32.162 "w_mbytes_per_sec": 0 00:31:32.162 }, 00:31:32.162 "claimed": false, 00:31:32.162 "zoned": false, 00:31:32.162 "supported_io_types": { 00:31:32.162 "read": true, 00:31:32.162 "write": true, 00:31:32.162 "unmap": false, 00:31:32.162 "flush": false, 00:31:32.162 "reset": true, 00:31:32.162 "nvme_admin": false, 00:31:32.162 "nvme_io": false, 00:31:32.162 "nvme_io_md": false, 00:31:32.162 "write_zeroes": true, 00:31:32.162 "zcopy": false, 00:31:32.162 "get_zone_info": false, 00:31:32.162 "zone_management": false, 00:31:32.162 "zone_append": false, 00:31:32.162 "compare": false, 00:31:32.162 "compare_and_write": false, 00:31:32.162 "abort": false, 00:31:32.162 "seek_hole": false, 00:31:32.162 "seek_data": false, 00:31:32.162 "copy": false, 00:31:32.162 "nvme_iov_md": false 00:31:32.162 }, 00:31:32.162 "memory_domains": [ 00:31:32.162 { 00:31:32.162 "dma_device_id": "system", 00:31:32.162 "dma_device_type": 1 00:31:32.162 }, 00:31:32.162 { 00:31:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.162 "dma_device_type": 2 00:31:32.162 }, 00:31:32.162 { 00:31:32.162 "dma_device_id": "system", 00:31:32.162 "dma_device_type": 1 00:31:32.162 }, 00:31:32.162 { 00:31:32.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.162 "dma_device_type": 2 00:31:32.162 }, 00:31:32.162 { 00:31:32.162 "dma_device_id": "system", 00:31:32.162 "dma_device_type": 1 00:31:32.162 }, 
00:31:32.163 { 00:31:32.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:32.163 "dma_device_type": 2 00:31:32.163 } 00:31:32.163 ], 00:31:32.163 "driver_specific": { 00:31:32.163 "raid": { 00:31:32.163 "uuid": "5debdc9a-f5af-4ba6-894b-c5d41ce20c30", 00:31:32.163 "strip_size_kb": 0, 00:31:32.163 "state": "online", 00:31:32.163 "raid_level": "raid1", 00:31:32.163 "superblock": false, 00:31:32.163 "num_base_bdevs": 3, 00:31:32.163 "num_base_bdevs_discovered": 3, 00:31:32.163 "num_base_bdevs_operational": 3, 00:31:32.163 "base_bdevs_list": [ 00:31:32.163 { 00:31:32.163 "name": "NewBaseBdev", 00:31:32.163 "uuid": "694ad859-2fed-4926-b406-1f5ec65e1432", 00:31:32.163 "is_configured": true, 00:31:32.163 "data_offset": 0, 00:31:32.163 "data_size": 65536 00:31:32.163 }, 00:31:32.163 { 00:31:32.163 "name": "BaseBdev2", 00:31:32.163 "uuid": "9ea00256-a50c-4652-846e-85725eb28b0f", 00:31:32.163 "is_configured": true, 00:31:32.163 "data_offset": 0, 00:31:32.163 "data_size": 65536 00:31:32.163 }, 00:31:32.163 { 00:31:32.163 "name": "BaseBdev3", 00:31:32.163 "uuid": "d2b5a241-a4db-41cd-a72d-8d14198528c9", 00:31:32.163 "is_configured": true, 00:31:32.163 "data_offset": 0, 00:31:32.163 "data_size": 65536 00:31:32.163 } 00:31:32.163 ] 00:31:32.163 } 00:31:32.163 } 00:31:32.163 }' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:32.163 BaseBdev2 00:31:32.163 BaseBdev3' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.163 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.421 [2024-10-09 14:00:38.736250] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:32.421 [2024-10-09 14:00:38.736284] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:32.421 [2024-10-09 14:00:38.736359] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.421 [2024-10-09 14:00:38.736630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.421 [2024-10-09 14:00:38.736645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78788 00:31:32.421 14:00:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78788 ']' 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78788 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78788 00:31:32.421 killing process with pid 78788 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78788' 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78788 00:31:32.421 [2024-10-09 14:00:38.780408] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:32.421 14:00:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78788 00:31:32.421 [2024-10-09 14:00:38.811867] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:31:32.680 00:31:32.680 real 0m9.453s 00:31:32.680 user 0m16.325s 00:31:32.680 sys 0m1.953s 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.680 ************************************ 00:31:32.680 END TEST raid_state_function_test 00:31:32.680 ************************************ 00:31:32.680 14:00:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 3 true 00:31:32.680 14:00:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:32.680 14:00:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:32.680 14:00:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:32.680 ************************************ 00:31:32.680 START TEST raid_state_function_test_sb 00:31:32.680 ************************************ 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev3 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79404 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79404' 00:31:32.680 Process raid pid: 79404 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79404 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@831 -- # '[' -z 79404 ']' 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:32.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:32.680 14:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.939 [2024-10-09 14:00:39.254293] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:32.939 [2024-10-09 14:00:39.254496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.939 [2024-10-09 14:00:39.432918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.939 [2024-10-09 14:00:39.479859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.197 [2024-10-09 14:00:39.523580] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:33.197 [2024-10-09 14:00:39.523616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.762 [2024-10-09 14:00:40.150738] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:33.762 [2024-10-09 14:00:40.150951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:33.762 [2024-10-09 14:00:40.150978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:33.762 [2024-10-09 14:00:40.150995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:33.762 [2024-10-09 14:00:40.151003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:33.762 [2024-10-09 14:00:40.151019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:33.762 14:00:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.762 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:33.762 "name": "Existed_Raid", 00:31:33.762 "uuid": "3e22fa6d-08d9-4838-bfdf-532e13439b62", 00:31:33.762 "strip_size_kb": 0, 00:31:33.762 "state": "configuring", 00:31:33.762 "raid_level": "raid1", 00:31:33.762 "superblock": true, 00:31:33.762 "num_base_bdevs": 3, 00:31:33.762 "num_base_bdevs_discovered": 0, 00:31:33.762 "num_base_bdevs_operational": 3, 00:31:33.762 "base_bdevs_list": [ 00:31:33.762 { 00:31:33.762 "name": "BaseBdev1", 00:31:33.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.763 "is_configured": false, 00:31:33.763 "data_offset": 0, 00:31:33.763 "data_size": 0 00:31:33.763 }, 00:31:33.763 { 00:31:33.763 "name": "BaseBdev2", 00:31:33.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.763 "is_configured": false, 00:31:33.763 "data_offset": 0, 00:31:33.763 "data_size": 0 00:31:33.763 }, 00:31:33.763 { 00:31:33.763 "name": 
"BaseBdev3", 00:31:33.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.763 "is_configured": false, 00:31:33.763 "data_offset": 0, 00:31:33.763 "data_size": 0 00:31:33.763 } 00:31:33.763 ] 00:31:33.763 }' 00:31:33.763 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:33.763 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.022 [2024-10-09 14:00:40.526729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:34.022 [2024-10-09 14:00:40.526903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.022 [2024-10-09 14:00:40.534791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:34.022 [2024-10-09 14:00:40.534827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:34.022 [2024-10-09 14:00:40.534837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:34.022 [2024-10-09 14:00:40.534850] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:34.022 [2024-10-09 14:00:40.534857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:34.022 [2024-10-09 14:00:40.534870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.022 [2024-10-09 14:00:40.552361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:34.022 BaseBdev1 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.022 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.281 [ 00:31:34.281 { 00:31:34.281 "name": "BaseBdev1", 00:31:34.281 "aliases": [ 00:31:34.281 "3878a15c-78ab-4da4-b1e1-45f6025feecb" 00:31:34.281 ], 00:31:34.281 "product_name": "Malloc disk", 00:31:34.281 "block_size": 512, 00:31:34.281 "num_blocks": 65536, 00:31:34.281 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:34.281 "assigned_rate_limits": { 00:31:34.281 "rw_ios_per_sec": 0, 00:31:34.281 "rw_mbytes_per_sec": 0, 00:31:34.281 "r_mbytes_per_sec": 0, 00:31:34.281 "w_mbytes_per_sec": 0 00:31:34.281 }, 00:31:34.281 "claimed": true, 00:31:34.281 "claim_type": "exclusive_write", 00:31:34.281 "zoned": false, 00:31:34.281 "supported_io_types": { 00:31:34.281 "read": true, 00:31:34.281 "write": true, 00:31:34.281 "unmap": true, 00:31:34.281 "flush": true, 00:31:34.281 "reset": true, 00:31:34.281 "nvme_admin": false, 00:31:34.281 "nvme_io": false, 00:31:34.281 "nvme_io_md": false, 00:31:34.281 "write_zeroes": true, 00:31:34.281 "zcopy": true, 00:31:34.281 "get_zone_info": false, 00:31:34.281 "zone_management": false, 00:31:34.281 "zone_append": false, 00:31:34.281 "compare": false, 00:31:34.282 "compare_and_write": false, 00:31:34.282 "abort": true, 00:31:34.282 "seek_hole": false, 00:31:34.282 "seek_data": false, 00:31:34.282 "copy": true, 00:31:34.282 "nvme_iov_md": false 00:31:34.282 }, 00:31:34.282 "memory_domains": [ 00:31:34.282 { 00:31:34.282 "dma_device_id": "system", 
00:31:34.282 "dma_device_type": 1 00:31:34.282 }, 00:31:34.282 { 00:31:34.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:34.282 "dma_device_type": 2 00:31:34.282 } 00:31:34.282 ], 00:31:34.282 "driver_specific": {} 00:31:34.282 } 00:31:34.282 ] 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.282 "name": "Existed_Raid", 00:31:34.282 "uuid": "c4441ff7-64d7-4a23-b544-35ef9d7adb23", 00:31:34.282 "strip_size_kb": 0, 00:31:34.282 "state": "configuring", 00:31:34.282 "raid_level": "raid1", 00:31:34.282 "superblock": true, 00:31:34.282 "num_base_bdevs": 3, 00:31:34.282 "num_base_bdevs_discovered": 1, 00:31:34.282 "num_base_bdevs_operational": 3, 00:31:34.282 "base_bdevs_list": [ 00:31:34.282 { 00:31:34.282 "name": "BaseBdev1", 00:31:34.282 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:34.282 "is_configured": true, 00:31:34.282 "data_offset": 2048, 00:31:34.282 "data_size": 63488 00:31:34.282 }, 00:31:34.282 { 00:31:34.282 "name": "BaseBdev2", 00:31:34.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.282 "is_configured": false, 00:31:34.282 "data_offset": 0, 00:31:34.282 "data_size": 0 00:31:34.282 }, 00:31:34.282 { 00:31:34.282 "name": "BaseBdev3", 00:31:34.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.282 "is_configured": false, 00:31:34.282 "data_offset": 0, 00:31:34.282 "data_size": 0 00:31:34.282 } 00:31:34.282 ] 00:31:34.282 }' 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.282 14:00:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:34.540 [2024-10-09 14:00:41.016509] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:34.540 [2024-10-09 14:00:41.016573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.540 [2024-10-09 14:00:41.028568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:34.540 [2024-10-09 14:00:41.031159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:34.540 [2024-10-09 14:00:41.031205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:34.540 [2024-10-09 14:00:41.031217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:34.540 [2024-10-09 14:00:41.031232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:34.540 14:00:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.540 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:34.540 "name": "Existed_Raid", 00:31:34.540 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:34.540 "strip_size_kb": 0, 00:31:34.540 "state": "configuring", 00:31:34.541 "raid_level": "raid1", 00:31:34.541 "superblock": true, 00:31:34.541 "num_base_bdevs": 3, 00:31:34.541 
"num_base_bdevs_discovered": 1, 00:31:34.541 "num_base_bdevs_operational": 3, 00:31:34.541 "base_bdevs_list": [ 00:31:34.541 { 00:31:34.541 "name": "BaseBdev1", 00:31:34.541 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:34.541 "is_configured": true, 00:31:34.541 "data_offset": 2048, 00:31:34.541 "data_size": 63488 00:31:34.541 }, 00:31:34.541 { 00:31:34.541 "name": "BaseBdev2", 00:31:34.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.541 "is_configured": false, 00:31:34.541 "data_offset": 0, 00:31:34.541 "data_size": 0 00:31:34.541 }, 00:31:34.541 { 00:31:34.541 "name": "BaseBdev3", 00:31:34.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.541 "is_configured": false, 00:31:34.541 "data_offset": 0, 00:31:34.541 "data_size": 0 00:31:34.541 } 00:31:34.541 ] 00:31:34.541 }' 00:31:34.541 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:34.541 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.169 [2024-10-09 14:00:41.465177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:35.169 BaseBdev2 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.169 [ 00:31:35.169 { 00:31:35.169 "name": "BaseBdev2", 00:31:35.169 "aliases": [ 00:31:35.169 "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb" 00:31:35.169 ], 00:31:35.169 "product_name": "Malloc disk", 00:31:35.169 "block_size": 512, 00:31:35.169 "num_blocks": 65536, 00:31:35.169 "uuid": "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb", 00:31:35.169 "assigned_rate_limits": { 00:31:35.169 "rw_ios_per_sec": 0, 00:31:35.169 "rw_mbytes_per_sec": 0, 00:31:35.169 "r_mbytes_per_sec": 0, 00:31:35.169 "w_mbytes_per_sec": 0 00:31:35.169 }, 00:31:35.169 "claimed": true, 00:31:35.169 "claim_type": "exclusive_write", 00:31:35.169 "zoned": false, 00:31:35.169 "supported_io_types": { 00:31:35.169 "read": true, 00:31:35.169 "write": true, 00:31:35.169 "unmap": true, 00:31:35.169 "flush": true, 00:31:35.169 "reset": true, 00:31:35.169 "nvme_admin": false, 
00:31:35.169 "nvme_io": false, 00:31:35.169 "nvme_io_md": false, 00:31:35.169 "write_zeroes": true, 00:31:35.169 "zcopy": true, 00:31:35.169 "get_zone_info": false, 00:31:35.169 "zone_management": false, 00:31:35.169 "zone_append": false, 00:31:35.169 "compare": false, 00:31:35.169 "compare_and_write": false, 00:31:35.169 "abort": true, 00:31:35.169 "seek_hole": false, 00:31:35.169 "seek_data": false, 00:31:35.169 "copy": true, 00:31:35.169 "nvme_iov_md": false 00:31:35.169 }, 00:31:35.169 "memory_domains": [ 00:31:35.169 { 00:31:35.169 "dma_device_id": "system", 00:31:35.169 "dma_device_type": 1 00:31:35.169 }, 00:31:35.169 { 00:31:35.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.169 "dma_device_type": 2 00:31:35.169 } 00:31:35.169 ], 00:31:35.169 "driver_specific": {} 00:31:35.169 } 00:31:35.169 ] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.169 "name": "Existed_Raid", 00:31:35.169 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:35.169 "strip_size_kb": 0, 00:31:35.169 "state": "configuring", 00:31:35.169 "raid_level": "raid1", 00:31:35.169 "superblock": true, 00:31:35.169 "num_base_bdevs": 3, 00:31:35.169 "num_base_bdevs_discovered": 2, 00:31:35.169 "num_base_bdevs_operational": 3, 00:31:35.169 "base_bdevs_list": [ 00:31:35.169 { 00:31:35.169 "name": "BaseBdev1", 00:31:35.169 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:35.169 "is_configured": true, 00:31:35.169 "data_offset": 2048, 00:31:35.169 "data_size": 63488 00:31:35.169 }, 00:31:35.169 { 00:31:35.169 "name": "BaseBdev2", 00:31:35.169 "uuid": "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb", 00:31:35.169 "is_configured": true, 00:31:35.169 "data_offset": 2048, 00:31:35.169 "data_size": 63488 00:31:35.169 }, 
00:31:35.169 { 00:31:35.169 "name": "BaseBdev3", 00:31:35.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.169 "is_configured": false, 00:31:35.169 "data_offset": 0, 00:31:35.169 "data_size": 0 00:31:35.169 } 00:31:35.169 ] 00:31:35.169 }' 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.169 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.429 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:35.429 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.429 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.429 [2024-10-09 14:00:41.940442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:35.429 [2024-10-09 14:00:41.940663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:31:35.429 [2024-10-09 14:00:41.940692] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:35.429 BaseBdev3 00:31:35.429 [2024-10-09 14:00:41.940995] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:35.429 [2024-10-09 14:00:41.941141] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:31:35.429 [2024-10-09 14:00:41.941158] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:31:35.429 [2024-10-09 14:00:41.941278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:35.429 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.429 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:35.430 14:00:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.430 [ 00:31:35.430 { 00:31:35.430 "name": "BaseBdev3", 00:31:35.430 "aliases": [ 00:31:35.430 "81fc7acc-8c0b-4521-8076-e1413b49167a" 00:31:35.430 ], 00:31:35.430 "product_name": "Malloc disk", 00:31:35.430 "block_size": 512, 00:31:35.430 "num_blocks": 65536, 00:31:35.430 "uuid": "81fc7acc-8c0b-4521-8076-e1413b49167a", 00:31:35.430 "assigned_rate_limits": { 00:31:35.430 "rw_ios_per_sec": 0, 00:31:35.430 "rw_mbytes_per_sec": 0, 00:31:35.430 "r_mbytes_per_sec": 0, 00:31:35.430 "w_mbytes_per_sec": 0 00:31:35.430 }, 00:31:35.430 "claimed": true, 00:31:35.430 "claim_type": "exclusive_write", 00:31:35.430 "zoned": false, 
00:31:35.430 "supported_io_types": { 00:31:35.430 "read": true, 00:31:35.430 "write": true, 00:31:35.430 "unmap": true, 00:31:35.430 "flush": true, 00:31:35.430 "reset": true, 00:31:35.430 "nvme_admin": false, 00:31:35.430 "nvme_io": false, 00:31:35.430 "nvme_io_md": false, 00:31:35.430 "write_zeroes": true, 00:31:35.430 "zcopy": true, 00:31:35.430 "get_zone_info": false, 00:31:35.430 "zone_management": false, 00:31:35.430 "zone_append": false, 00:31:35.430 "compare": false, 00:31:35.430 "compare_and_write": false, 00:31:35.430 "abort": true, 00:31:35.430 "seek_hole": false, 00:31:35.430 "seek_data": false, 00:31:35.430 "copy": true, 00:31:35.430 "nvme_iov_md": false 00:31:35.430 }, 00:31:35.430 "memory_domains": [ 00:31:35.430 { 00:31:35.430 "dma_device_id": "system", 00:31:35.430 "dma_device_type": 1 00:31:35.430 }, 00:31:35.430 { 00:31:35.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.430 "dma_device_type": 2 00:31:35.430 } 00:31:35.430 ], 00:31:35.430 "driver_specific": {} 00:31:35.430 } 00:31:35.430 ] 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:35.430 14:00:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:35.430 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.688 14:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.688 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.688 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:35.688 "name": "Existed_Raid", 00:31:35.688 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:35.688 "strip_size_kb": 0, 00:31:35.688 "state": "online", 00:31:35.688 "raid_level": "raid1", 00:31:35.688 "superblock": true, 00:31:35.688 "num_base_bdevs": 3, 00:31:35.688 "num_base_bdevs_discovered": 3, 00:31:35.688 "num_base_bdevs_operational": 3, 00:31:35.688 "base_bdevs_list": [ 00:31:35.688 { 00:31:35.688 "name": "BaseBdev1", 00:31:35.688 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:35.688 "is_configured": true, 00:31:35.688 "data_offset": 2048, 00:31:35.688 "data_size": 63488 00:31:35.688 }, 00:31:35.688 { 00:31:35.688 
"name": "BaseBdev2", 00:31:35.688 "uuid": "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb", 00:31:35.688 "is_configured": true, 00:31:35.688 "data_offset": 2048, 00:31:35.688 "data_size": 63488 00:31:35.688 }, 00:31:35.688 { 00:31:35.688 "name": "BaseBdev3", 00:31:35.688 "uuid": "81fc7acc-8c0b-4521-8076-e1413b49167a", 00:31:35.688 "is_configured": true, 00:31:35.688 "data_offset": 2048, 00:31:35.688 "data_size": 63488 00:31:35.688 } 00:31:35.688 ] 00:31:35.688 }' 00:31:35.688 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:35.688 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:35.948 [2024-10-09 14:00:42.433089] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.948 "name": "Existed_Raid", 00:31:35.948 "aliases": [ 00:31:35.948 "c00428a3-c34a-4543-bdb1-45a3687d9ac6" 00:31:35.948 ], 00:31:35.948 "product_name": "Raid Volume", 00:31:35.948 "block_size": 512, 00:31:35.948 "num_blocks": 63488, 00:31:35.948 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:35.948 "assigned_rate_limits": { 00:31:35.948 "rw_ios_per_sec": 0, 00:31:35.948 "rw_mbytes_per_sec": 0, 00:31:35.948 "r_mbytes_per_sec": 0, 00:31:35.948 "w_mbytes_per_sec": 0 00:31:35.948 }, 00:31:35.948 "claimed": false, 00:31:35.948 "zoned": false, 00:31:35.948 "supported_io_types": { 00:31:35.948 "read": true, 00:31:35.948 "write": true, 00:31:35.948 "unmap": false, 00:31:35.948 "flush": false, 00:31:35.948 "reset": true, 00:31:35.948 "nvme_admin": false, 00:31:35.948 "nvme_io": false, 00:31:35.948 "nvme_io_md": false, 00:31:35.948 "write_zeroes": true, 00:31:35.948 "zcopy": false, 00:31:35.948 "get_zone_info": false, 00:31:35.948 "zone_management": false, 00:31:35.948 "zone_append": false, 00:31:35.948 "compare": false, 00:31:35.948 "compare_and_write": false, 00:31:35.948 "abort": false, 00:31:35.948 "seek_hole": false, 00:31:35.948 "seek_data": false, 00:31:35.948 "copy": false, 00:31:35.948 "nvme_iov_md": false 00:31:35.948 }, 00:31:35.948 "memory_domains": [ 00:31:35.948 { 00:31:35.948 "dma_device_id": "system", 00:31:35.948 "dma_device_type": 1 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.948 "dma_device_type": 2 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "dma_device_id": "system", 00:31:35.948 "dma_device_type": 1 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.948 "dma_device_type": 2 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "dma_device_id": "system", 00:31:35.948 "dma_device_type": 1 00:31:35.948 }, 
00:31:35.948 { 00:31:35.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:35.948 "dma_device_type": 2 00:31:35.948 } 00:31:35.948 ], 00:31:35.948 "driver_specific": { 00:31:35.948 "raid": { 00:31:35.948 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:35.948 "strip_size_kb": 0, 00:31:35.948 "state": "online", 00:31:35.948 "raid_level": "raid1", 00:31:35.948 "superblock": true, 00:31:35.948 "num_base_bdevs": 3, 00:31:35.948 "num_base_bdevs_discovered": 3, 00:31:35.948 "num_base_bdevs_operational": 3, 00:31:35.948 "base_bdevs_list": [ 00:31:35.948 { 00:31:35.948 "name": "BaseBdev1", 00:31:35.948 "uuid": "3878a15c-78ab-4da4-b1e1-45f6025feecb", 00:31:35.948 "is_configured": true, 00:31:35.948 "data_offset": 2048, 00:31:35.948 "data_size": 63488 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "name": "BaseBdev2", 00:31:35.948 "uuid": "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb", 00:31:35.948 "is_configured": true, 00:31:35.948 "data_offset": 2048, 00:31:35.948 "data_size": 63488 00:31:35.948 }, 00:31:35.948 { 00:31:35.948 "name": "BaseBdev3", 00:31:35.948 "uuid": "81fc7acc-8c0b-4521-8076-e1413b49167a", 00:31:35.948 "is_configured": true, 00:31:35.948 "data_offset": 2048, 00:31:35.948 "data_size": 63488 00:31:35.948 } 00:31:35.948 ] 00:31:35.948 } 00:31:35.948 } 00:31:35.948 }' 00:31:35.948 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:36.208 BaseBdev2 00:31:36.208 BaseBdev3' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:36.208 14:00:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.208 [2024-10-09 14:00:42.704727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:36.208 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.467 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:36.467 "name": "Existed_Raid", 00:31:36.467 "uuid": "c00428a3-c34a-4543-bdb1-45a3687d9ac6", 00:31:36.467 "strip_size_kb": 0, 00:31:36.467 "state": "online", 00:31:36.467 "raid_level": 
"raid1", 00:31:36.467 "superblock": true, 00:31:36.467 "num_base_bdevs": 3, 00:31:36.467 "num_base_bdevs_discovered": 2, 00:31:36.467 "num_base_bdevs_operational": 2, 00:31:36.467 "base_bdevs_list": [ 00:31:36.467 { 00:31:36.467 "name": null, 00:31:36.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.467 "is_configured": false, 00:31:36.467 "data_offset": 0, 00:31:36.467 "data_size": 63488 00:31:36.467 }, 00:31:36.467 { 00:31:36.467 "name": "BaseBdev2", 00:31:36.467 "uuid": "14f0d4e0-9bc9-4738-8b68-2b5e385cb4eb", 00:31:36.467 "is_configured": true, 00:31:36.467 "data_offset": 2048, 00:31:36.467 "data_size": 63488 00:31:36.467 }, 00:31:36.467 { 00:31:36.467 "name": "BaseBdev3", 00:31:36.467 "uuid": "81fc7acc-8c0b-4521-8076-e1413b49167a", 00:31:36.467 "is_configured": true, 00:31:36.467 "data_offset": 2048, 00:31:36.467 "data_size": 63488 00:31:36.467 } 00:31:36.467 ] 00:31:36.467 }' 00:31:36.467 14:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:36.467 14:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.726 [2024-10-09 14:00:43.221228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.726 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.727 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.986 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:31:36.986 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:36.986 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:31:36.986 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.986 14:00:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.986 [2024-10-09 14:00:43.289595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:36.986 [2024-10-09 14:00:43.289712] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:36.986 [2024-10-09 14:00:43.302258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:36.986 [2024-10-09 14:00:43.302497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:36.987 [2024-10-09 14:00:43.302643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:31:36.987 14:00:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 BaseBdev2 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:36.987 14:00:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 [ 00:31:36.987 { 00:31:36.987 "name": "BaseBdev2", 00:31:36.987 "aliases": [ 00:31:36.987 "190a0688-8cfc-490e-b397-937316437bd6" 00:31:36.987 ], 00:31:36.987 "product_name": "Malloc disk", 00:31:36.987 "block_size": 512, 00:31:36.987 "num_blocks": 65536, 00:31:36.987 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:36.987 "assigned_rate_limits": { 00:31:36.987 "rw_ios_per_sec": 0, 00:31:36.987 "rw_mbytes_per_sec": 0, 00:31:36.987 "r_mbytes_per_sec": 0, 00:31:36.987 "w_mbytes_per_sec": 0 00:31:36.987 }, 00:31:36.987 "claimed": false, 00:31:36.987 "zoned": false, 00:31:36.987 "supported_io_types": { 00:31:36.987 "read": true, 00:31:36.987 "write": true, 00:31:36.987 "unmap": true, 00:31:36.987 "flush": true, 00:31:36.987 "reset": true, 00:31:36.987 "nvme_admin": false, 00:31:36.987 "nvme_io": false, 00:31:36.987 "nvme_io_md": false, 00:31:36.987 "write_zeroes": true, 00:31:36.987 "zcopy": true, 00:31:36.987 "get_zone_info": false, 00:31:36.987 "zone_management": false, 00:31:36.987 "zone_append": false, 00:31:36.987 "compare": false, 00:31:36.987 "compare_and_write": false, 00:31:36.987 "abort": true, 00:31:36.987 "seek_hole": false, 00:31:36.987 "seek_data": false, 00:31:36.987 "copy": true, 00:31:36.987 "nvme_iov_md": false 00:31:36.987 }, 00:31:36.987 "memory_domains": [ 00:31:36.987 { 00:31:36.987 "dma_device_id": "system", 00:31:36.987 "dma_device_type": 1 00:31:36.987 }, 00:31:36.987 { 00:31:36.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.987 "dma_device_type": 2 00:31:36.987 } 00:31:36.987 ], 00:31:36.987 "driver_specific": {} 00:31:36.987 } 00:31:36.987 ] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 BaseBdev3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 [ 00:31:36.987 { 00:31:36.987 "name": "BaseBdev3", 00:31:36.987 "aliases": [ 00:31:36.987 "93947667-5913-4cbe-b9f1-452c41c2b070" 00:31:36.987 ], 00:31:36.987 "product_name": "Malloc disk", 00:31:36.987 "block_size": 512, 00:31:36.987 "num_blocks": 65536, 00:31:36.987 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:36.987 "assigned_rate_limits": { 00:31:36.987 "rw_ios_per_sec": 0, 00:31:36.987 "rw_mbytes_per_sec": 0, 00:31:36.987 "r_mbytes_per_sec": 0, 00:31:36.987 "w_mbytes_per_sec": 0 00:31:36.987 }, 00:31:36.987 "claimed": false, 00:31:36.987 "zoned": false, 00:31:36.987 "supported_io_types": { 00:31:36.987 "read": true, 00:31:36.987 "write": true, 00:31:36.987 "unmap": true, 00:31:36.987 "flush": true, 00:31:36.987 "reset": true, 00:31:36.987 "nvme_admin": false, 00:31:36.987 "nvme_io": false, 00:31:36.987 "nvme_io_md": false, 00:31:36.987 "write_zeroes": true, 00:31:36.987 "zcopy": true, 00:31:36.987 "get_zone_info": false, 00:31:36.987 "zone_management": false, 00:31:36.987 "zone_append": false, 00:31:36.987 "compare": false, 00:31:36.987 "compare_and_write": false, 00:31:36.987 "abort": true, 00:31:36.987 "seek_hole": false, 00:31:36.987 "seek_data": false, 00:31:36.987 "copy": true, 00:31:36.987 "nvme_iov_md": false 00:31:36.987 }, 00:31:36.987 "memory_domains": [ 00:31:36.987 { 00:31:36.987 "dma_device_id": "system", 00:31:36.987 "dma_device_type": 1 00:31:36.987 }, 00:31:36.987 { 00:31:36.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.987 "dma_device_type": 2 00:31:36.987 } 00:31:36.987 ], 00:31:36.987 "driver_specific": {} 00:31:36.987 } 00:31:36.987 ] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 
14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.987 [2024-10-09 14:00:43.459464] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:36.987 [2024-10-09 14:00:43.459516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:36.987 [2024-10-09 14:00:43.459535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:36.987 [2024-10-09 14:00:43.461753] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:36.987 14:00:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:36.987 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:36.988 "name": "Existed_Raid", 00:31:36.988 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:36.988 "strip_size_kb": 0, 00:31:36.988 "state": "configuring", 00:31:36.988 "raid_level": "raid1", 00:31:36.988 "superblock": true, 00:31:36.988 "num_base_bdevs": 3, 00:31:36.988 "num_base_bdevs_discovered": 2, 00:31:36.988 "num_base_bdevs_operational": 3, 00:31:36.988 "base_bdevs_list": [ 00:31:36.988 { 00:31:36.988 "name": "BaseBdev1", 00:31:36.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:36.988 "is_configured": false, 00:31:36.988 "data_offset": 0, 00:31:36.988 "data_size": 0 00:31:36.988 }, 00:31:36.988 { 00:31:36.988 "name": "BaseBdev2", 00:31:36.988 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:36.988 "is_configured": 
true, 00:31:36.988 "data_offset": 2048, 00:31:36.988 "data_size": 63488 00:31:36.988 }, 00:31:36.988 { 00:31:36.988 "name": "BaseBdev3", 00:31:36.988 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:36.988 "is_configured": true, 00:31:36.988 "data_offset": 2048, 00:31:36.988 "data_size": 63488 00:31:36.988 } 00:31:36.988 ] 00:31:36.988 }' 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:36.988 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.556 [2024-10-09 14:00:43.923561] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:37.556 14:00:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:37.556 "name": "Existed_Raid", 00:31:37.556 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:37.556 "strip_size_kb": 0, 00:31:37.556 "state": "configuring", 00:31:37.556 "raid_level": "raid1", 00:31:37.556 "superblock": true, 00:31:37.556 "num_base_bdevs": 3, 00:31:37.556 "num_base_bdevs_discovered": 1, 00:31:37.556 "num_base_bdevs_operational": 3, 00:31:37.556 "base_bdevs_list": [ 00:31:37.556 { 00:31:37.556 "name": "BaseBdev1", 00:31:37.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:37.556 "is_configured": false, 00:31:37.556 "data_offset": 0, 00:31:37.556 "data_size": 0 00:31:37.556 }, 00:31:37.556 { 00:31:37.556 "name": null, 00:31:37.556 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:37.556 "is_configured": false, 00:31:37.556 "data_offset": 0, 00:31:37.556 "data_size": 63488 00:31:37.556 }, 00:31:37.556 { 00:31:37.556 "name": "BaseBdev3", 00:31:37.556 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:37.556 "is_configured": true, 
00:31:37.556 "data_offset": 2048, 00:31:37.556 "data_size": 63488 00:31:37.556 } 00:31:37.556 ] 00:31:37.556 }' 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:37.556 14:00:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:37.816 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:37.816 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:37.816 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.816 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.078 [2024-10-09 14:00:44.414744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:38.078 BaseBdev1 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:38.078 
14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.078 [ 00:31:38.078 { 00:31:38.078 "name": "BaseBdev1", 00:31:38.078 "aliases": [ 00:31:38.078 "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf" 00:31:38.078 ], 00:31:38.078 "product_name": "Malloc disk", 00:31:38.078 "block_size": 512, 00:31:38.078 "num_blocks": 65536, 00:31:38.078 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:38.078 "assigned_rate_limits": { 00:31:38.078 "rw_ios_per_sec": 0, 00:31:38.078 "rw_mbytes_per_sec": 0, 00:31:38.078 "r_mbytes_per_sec": 0, 00:31:38.078 "w_mbytes_per_sec": 0 00:31:38.078 }, 00:31:38.078 "claimed": true, 00:31:38.078 "claim_type": "exclusive_write", 00:31:38.078 "zoned": false, 00:31:38.078 "supported_io_types": { 00:31:38.078 "read": true, 00:31:38.078 "write": true, 00:31:38.078 "unmap": true, 00:31:38.078 "flush": true, 00:31:38.078 "reset": true, 00:31:38.078 "nvme_admin": false, 00:31:38.078 "nvme_io": 
false, 00:31:38.078 "nvme_io_md": false, 00:31:38.078 "write_zeroes": true, 00:31:38.078 "zcopy": true, 00:31:38.078 "get_zone_info": false, 00:31:38.078 "zone_management": false, 00:31:38.078 "zone_append": false, 00:31:38.078 "compare": false, 00:31:38.078 "compare_and_write": false, 00:31:38.078 "abort": true, 00:31:38.078 "seek_hole": false, 00:31:38.078 "seek_data": false, 00:31:38.078 "copy": true, 00:31:38.078 "nvme_iov_md": false 00:31:38.078 }, 00:31:38.078 "memory_domains": [ 00:31:38.078 { 00:31:38.078 "dma_device_id": "system", 00:31:38.078 "dma_device_type": 1 00:31:38.078 }, 00:31:38.078 { 00:31:38.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:38.078 "dma_device_type": 2 00:31:38.078 } 00:31:38.078 ], 00:31:38.078 "driver_specific": {} 00:31:38.078 } 00:31:38.078 ] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:38.078 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:38.079 14:00:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:38.079 "name": "Existed_Raid", 00:31:38.079 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:38.079 "strip_size_kb": 0, 00:31:38.079 "state": "configuring", 00:31:38.079 "raid_level": "raid1", 00:31:38.079 "superblock": true, 00:31:38.079 "num_base_bdevs": 3, 00:31:38.079 "num_base_bdevs_discovered": 2, 00:31:38.079 "num_base_bdevs_operational": 3, 00:31:38.079 "base_bdevs_list": [ 00:31:38.079 { 00:31:38.079 "name": "BaseBdev1", 00:31:38.079 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:38.079 "is_configured": true, 00:31:38.079 "data_offset": 2048, 00:31:38.079 "data_size": 63488 00:31:38.079 }, 00:31:38.079 { 00:31:38.079 "name": null, 00:31:38.079 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:38.079 "is_configured": false, 00:31:38.079 "data_offset": 0, 00:31:38.079 "data_size": 63488 00:31:38.079 }, 00:31:38.079 { 00:31:38.079 "name": "BaseBdev3", 00:31:38.079 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:38.079 "is_configured": true, 00:31:38.079 "data_offset": 2048, 00:31:38.079 "data_size": 63488 00:31:38.079 } 00:31:38.079 ] 00:31:38.079 }' 
00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:38.079 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.647 [2024-10-09 14:00:44.954905] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:38.647 
14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.647 14:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:38.647 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:38.647 "name": "Existed_Raid", 00:31:38.647 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:38.647 "strip_size_kb": 0, 00:31:38.647 "state": "configuring", 00:31:38.647 "raid_level": "raid1", 00:31:38.647 "superblock": true, 00:31:38.647 "num_base_bdevs": 3, 00:31:38.647 "num_base_bdevs_discovered": 1, 00:31:38.647 "num_base_bdevs_operational": 3, 00:31:38.647 "base_bdevs_list": [ 00:31:38.647 { 00:31:38.647 "name": "BaseBdev1", 00:31:38.647 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:38.647 "is_configured": true, 00:31:38.647 "data_offset": 2048, 00:31:38.647 "data_size": 63488 00:31:38.647 }, 00:31:38.647 { 
00:31:38.647 "name": null, 00:31:38.647 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:38.647 "is_configured": false, 00:31:38.647 "data_offset": 0, 00:31:38.647 "data_size": 63488 00:31:38.647 }, 00:31:38.647 { 00:31:38.647 "name": null, 00:31:38.647 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:38.647 "is_configured": false, 00:31:38.647 "data_offset": 0, 00:31:38.647 "data_size": 63488 00:31:38.647 } 00:31:38.647 ] 00:31:38.647 }' 00:31:38.647 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:38.647 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.906 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:38.906 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:38.906 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:38.906 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:38.906 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.165 [2024-10-09 14:00:45.463083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.165 14:00:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:39.165 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.166 "name": "Existed_Raid", 00:31:39.166 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:39.166 "strip_size_kb": 0, 
00:31:39.166 "state": "configuring", 00:31:39.166 "raid_level": "raid1", 00:31:39.166 "superblock": true, 00:31:39.166 "num_base_bdevs": 3, 00:31:39.166 "num_base_bdevs_discovered": 2, 00:31:39.166 "num_base_bdevs_operational": 3, 00:31:39.166 "base_bdevs_list": [ 00:31:39.166 { 00:31:39.166 "name": "BaseBdev1", 00:31:39.166 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:39.166 "is_configured": true, 00:31:39.166 "data_offset": 2048, 00:31:39.166 "data_size": 63488 00:31:39.166 }, 00:31:39.166 { 00:31:39.166 "name": null, 00:31:39.166 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:39.166 "is_configured": false, 00:31:39.166 "data_offset": 0, 00:31:39.166 "data_size": 63488 00:31:39.166 }, 00:31:39.166 { 00:31:39.166 "name": "BaseBdev3", 00:31:39.166 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:39.166 "is_configured": true, 00:31:39.166 "data_offset": 2048, 00:31:39.166 "data_size": 63488 00:31:39.166 } 00:31:39.166 ] 00:31:39.166 }' 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.166 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.424 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.424 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:39.424 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.424 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.424 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.682 [2024-10-09 14:00:45.979193] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:39.682 14:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.682 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.682 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.682 "name": "Existed_Raid", 00:31:39.682 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:39.682 "strip_size_kb": 0, 00:31:39.682 "state": "configuring", 00:31:39.682 "raid_level": "raid1", 00:31:39.682 "superblock": true, 00:31:39.682 "num_base_bdevs": 3, 00:31:39.682 "num_base_bdevs_discovered": 1, 00:31:39.682 "num_base_bdevs_operational": 3, 00:31:39.682 "base_bdevs_list": [ 00:31:39.682 { 00:31:39.682 "name": null, 00:31:39.682 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:39.682 "is_configured": false, 00:31:39.682 "data_offset": 0, 00:31:39.682 "data_size": 63488 00:31:39.682 }, 00:31:39.682 { 00:31:39.682 "name": null, 00:31:39.682 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:39.682 "is_configured": false, 00:31:39.682 "data_offset": 0, 00:31:39.682 "data_size": 63488 00:31:39.682 }, 00:31:39.682 { 00:31:39.682 "name": "BaseBdev3", 00:31:39.682 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:39.682 "is_configured": true, 00:31:39.682 "data_offset": 2048, 00:31:39.682 "data_size": 63488 00:31:39.682 } 00:31:39.682 ] 00:31:39.682 }' 00:31:39.682 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.682 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.942 14:00:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.942 [2024-10-09 14:00:46.462176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:39.942 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.200 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.200 "name": "Existed_Raid", 00:31:40.200 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:40.200 "strip_size_kb": 0, 00:31:40.200 "state": "configuring", 00:31:40.200 "raid_level": "raid1", 00:31:40.200 "superblock": true, 00:31:40.200 "num_base_bdevs": 3, 00:31:40.200 "num_base_bdevs_discovered": 2, 00:31:40.200 "num_base_bdevs_operational": 3, 00:31:40.200 "base_bdevs_list": [ 00:31:40.200 { 00:31:40.200 "name": null, 00:31:40.200 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:40.200 "is_configured": false, 00:31:40.200 "data_offset": 0, 00:31:40.200 "data_size": 63488 00:31:40.200 }, 00:31:40.200 { 00:31:40.200 "name": "BaseBdev2", 00:31:40.200 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:40.200 "is_configured": true, 00:31:40.200 "data_offset": 2048, 00:31:40.200 "data_size": 63488 00:31:40.200 }, 00:31:40.200 { 00:31:40.200 "name": "BaseBdev3", 00:31:40.200 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:40.200 "is_configured": true, 00:31:40.200 "data_offset": 2048, 00:31:40.200 "data_size": 63488 00:31:40.200 } 00:31:40.200 ] 00:31:40.200 }' 00:31:40.200 14:00:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.200 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:40.459 14:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.459 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a005dee6-b3ce-43d4-a6d0-fb67c076ecdf 00:31:40.459 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.459 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.718 [2024-10-09 14:00:47.013792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:40.718 [2024-10-09 14:00:47.013964] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:40.718 [2024-10-09 14:00:47.013979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:40.718 [2024-10-09 14:00:47.014242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:40.718 NewBaseBdev 00:31:40.718 [2024-10-09 14:00:47.014374] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:40.718 [2024-10-09 14:00:47.014391] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:31:40.718 [2024-10-09 14:00:47.014504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.718 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.718 [ 00:31:40.718 { 00:31:40.718 "name": "NewBaseBdev", 00:31:40.718 "aliases": [ 00:31:40.718 "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf" 00:31:40.718 ], 00:31:40.718 "product_name": "Malloc disk", 00:31:40.718 "block_size": 512, 00:31:40.718 "num_blocks": 65536, 00:31:40.718 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:40.718 "assigned_rate_limits": { 00:31:40.718 "rw_ios_per_sec": 0, 00:31:40.718 "rw_mbytes_per_sec": 0, 00:31:40.718 "r_mbytes_per_sec": 0, 00:31:40.718 "w_mbytes_per_sec": 0 00:31:40.718 }, 00:31:40.718 "claimed": true, 00:31:40.718 "claim_type": "exclusive_write", 00:31:40.718 "zoned": false, 00:31:40.718 "supported_io_types": { 00:31:40.718 "read": true, 00:31:40.718 "write": true, 00:31:40.718 "unmap": true, 00:31:40.718 "flush": true, 00:31:40.718 "reset": true, 00:31:40.718 "nvme_admin": false, 00:31:40.718 "nvme_io": false, 00:31:40.718 "nvme_io_md": false, 00:31:40.718 "write_zeroes": true, 00:31:40.718 "zcopy": true, 00:31:40.718 "get_zone_info": false, 00:31:40.718 "zone_management": false, 00:31:40.718 "zone_append": false, 00:31:40.719 "compare": false, 00:31:40.719 "compare_and_write": false, 00:31:40.719 "abort": true, 00:31:40.719 "seek_hole": false, 00:31:40.719 "seek_data": false, 00:31:40.719 "copy": true, 00:31:40.719 "nvme_iov_md": false 00:31:40.719 }, 00:31:40.719 "memory_domains": [ 00:31:40.719 { 00:31:40.719 "dma_device_id": "system", 00:31:40.719 "dma_device_type": 1 00:31:40.719 }, 00:31:40.719 { 00:31:40.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.719 "dma_device_type": 2 00:31:40.719 } 00:31:40.719 ], 00:31:40.719 
"driver_specific": {} 00:31:40.719 } 00:31:40.719 ] 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.719 "name": "Existed_Raid", 00:31:40.719 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:40.719 "strip_size_kb": 0, 00:31:40.719 "state": "online", 00:31:40.719 "raid_level": "raid1", 00:31:40.719 "superblock": true, 00:31:40.719 "num_base_bdevs": 3, 00:31:40.719 "num_base_bdevs_discovered": 3, 00:31:40.719 "num_base_bdevs_operational": 3, 00:31:40.719 "base_bdevs_list": [ 00:31:40.719 { 00:31:40.719 "name": "NewBaseBdev", 00:31:40.719 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:40.719 "is_configured": true, 00:31:40.719 "data_offset": 2048, 00:31:40.719 "data_size": 63488 00:31:40.719 }, 00:31:40.719 { 00:31:40.719 "name": "BaseBdev2", 00:31:40.719 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:40.719 "is_configured": true, 00:31:40.719 "data_offset": 2048, 00:31:40.719 "data_size": 63488 00:31:40.719 }, 00:31:40.719 { 00:31:40.719 "name": "BaseBdev3", 00:31:40.719 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:40.719 "is_configured": true, 00:31:40.719 "data_offset": 2048, 00:31:40.719 "data_size": 63488 00:31:40.719 } 00:31:40.719 ] 00:31:40.719 }' 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.719 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:40.978 14:00:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:40.978 [2024-10-09 14:00:47.482254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.978 "name": "Existed_Raid", 00:31:40.978 "aliases": [ 00:31:40.978 "1087b6a1-0b2b-48b5-b0e7-95f3739d6713" 00:31:40.978 ], 00:31:40.978 "product_name": "Raid Volume", 00:31:40.978 "block_size": 512, 00:31:40.978 "num_blocks": 63488, 00:31:40.978 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:40.978 "assigned_rate_limits": { 00:31:40.978 "rw_ios_per_sec": 0, 00:31:40.978 "rw_mbytes_per_sec": 0, 00:31:40.978 "r_mbytes_per_sec": 0, 00:31:40.978 "w_mbytes_per_sec": 0 00:31:40.978 }, 00:31:40.978 "claimed": false, 00:31:40.978 "zoned": false, 00:31:40.978 "supported_io_types": { 00:31:40.978 "read": true, 00:31:40.978 "write": true, 00:31:40.978 "unmap": false, 00:31:40.978 "flush": false, 00:31:40.978 "reset": true, 00:31:40.978 "nvme_admin": false, 00:31:40.978 "nvme_io": false, 00:31:40.978 "nvme_io_md": false, 00:31:40.978 "write_zeroes": true, 00:31:40.978 "zcopy": false, 00:31:40.978 "get_zone_info": false, 00:31:40.978 "zone_management": false, 00:31:40.978 "zone_append": false, 
00:31:40.978 "compare": false, 00:31:40.978 "compare_and_write": false, 00:31:40.978 "abort": false, 00:31:40.978 "seek_hole": false, 00:31:40.978 "seek_data": false, 00:31:40.978 "copy": false, 00:31:40.978 "nvme_iov_md": false 00:31:40.978 }, 00:31:40.978 "memory_domains": [ 00:31:40.978 { 00:31:40.978 "dma_device_id": "system", 00:31:40.978 "dma_device_type": 1 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.978 "dma_device_type": 2 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "dma_device_id": "system", 00:31:40.978 "dma_device_type": 1 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.978 "dma_device_type": 2 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "dma_device_id": "system", 00:31:40.978 "dma_device_type": 1 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:40.978 "dma_device_type": 2 00:31:40.978 } 00:31:40.978 ], 00:31:40.978 "driver_specific": { 00:31:40.978 "raid": { 00:31:40.978 "uuid": "1087b6a1-0b2b-48b5-b0e7-95f3739d6713", 00:31:40.978 "strip_size_kb": 0, 00:31:40.978 "state": "online", 00:31:40.978 "raid_level": "raid1", 00:31:40.978 "superblock": true, 00:31:40.978 "num_base_bdevs": 3, 00:31:40.978 "num_base_bdevs_discovered": 3, 00:31:40.978 "num_base_bdevs_operational": 3, 00:31:40.978 "base_bdevs_list": [ 00:31:40.978 { 00:31:40.978 "name": "NewBaseBdev", 00:31:40.978 "uuid": "a005dee6-b3ce-43d4-a6d0-fb67c076ecdf", 00:31:40.978 "is_configured": true, 00:31:40.978 "data_offset": 2048, 00:31:40.978 "data_size": 63488 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "name": "BaseBdev2", 00:31:40.978 "uuid": "190a0688-8cfc-490e-b397-937316437bd6", 00:31:40.978 "is_configured": true, 00:31:40.978 "data_offset": 2048, 00:31:40.978 "data_size": 63488 00:31:40.978 }, 00:31:40.978 { 00:31:40.978 "name": "BaseBdev3", 00:31:40.978 "uuid": "93947667-5913-4cbe-b9f1-452c41c2b070", 00:31:40.978 "is_configured": true, 00:31:40.978 
"data_offset": 2048, 00:31:40.978 "data_size": 63488 00:31:40.978 } 00:31:40.978 ] 00:31:40.978 } 00:31:40.978 } 00:31:40.978 }' 00:31:40.978 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:31:41.237 BaseBdev2 00:31:41.237 BaseBdev3' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:41.237 14:00:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.237 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:31:41.238 [2024-10-09 14:00:47.737990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:41.238 [2024-10-09 14:00:47.738126] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:41.238 [2024-10-09 14:00:47.738216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:41.238 [2024-10-09 14:00:47.738467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:41.238 [2024-10-09 14:00:47.738480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79404 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79404 ']' 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79404 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79404 00:31:41.238 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:41.497 killing process with pid 79404 00:31:41.497 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:41.497 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79404' 00:31:41.497 14:00:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@969 -- # kill 79404 00:31:41.497 [2024-10-09 14:00:47.787569] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:41.497 14:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79404 00:31:41.497 [2024-10-09 14:00:47.828121] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:41.756 14:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:31:41.756 00:31:41.756 real 0m8.946s 00:31:41.756 user 0m15.384s 00:31:41.756 sys 0m1.874s 00:31:41.756 ************************************ 00:31:41.756 END TEST raid_state_function_test_sb 00:31:41.756 ************************************ 00:31:41.756 14:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.756 14:00:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.756 14:00:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:31:41.756 14:00:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:41.756 14:00:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.756 14:00:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:41.756 ************************************ 00:31:41.756 START TEST raid_superblock_test 00:31:41.756 ************************************ 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:31:41.756 
14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80008 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80008 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80008 ']' 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:41.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:41.756 14:00:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.756 [2024-10-09 14:00:48.235799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:41.756 [2024-10-09 14:00:48.235933] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80008 ] 00:31:42.015 [2024-10-09 14:00:48.394961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.015 [2024-10-09 14:00:48.440664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.015 [2024-10-09 14:00:48.484539] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.015 [2024-10-09 14:00:48.484598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:42.955 14:00:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 malloc1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 [2024-10-09 14:00:49.229126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:42.955 [2024-10-09 14:00:49.229330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.955 [2024-10-09 14:00:49.229396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:42.955 [2024-10-09 14:00:49.229500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.955 [2024-10-09 14:00:49.232054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.955 [2024-10-09 14:00:49.232209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:42.955 pt1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 malloc2 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 [2024-10-09 14:00:49.266428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:42.955 [2024-10-09 14:00:49.266621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.955 [2024-10-09 14:00:49.266650] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:42.955 [2024-10-09 14:00:49.266667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.955 [2024-10-09 14:00:49.269502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.955 [2024-10-09 14:00:49.269571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:42.955 pt2 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 malloc3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 [2024-10-09 14:00:49.291531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:42.955 [2024-10-09 14:00:49.291598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:42.955 [2024-10-09 14:00:49.291619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:42.955 [2024-10-09 14:00:49.291634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:42.955 [2024-10-09 14:00:49.294335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:42.955 [2024-10-09 14:00:49.294381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:42.955 pt3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.955 [2024-10-09 14:00:49.303599] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:42.955 [2024-10-09 14:00:49.305971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:42.955 [2024-10-09 
14:00:49.306042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:42.955 [2024-10-09 14:00:49.306195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:31:42.955 [2024-10-09 14:00:49.306209] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:42.955 [2024-10-09 14:00:49.306505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:42.955 [2024-10-09 14:00:49.306660] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:31:42.955 [2024-10-09 14:00:49.306685] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:31:42.955 [2024-10-09 14:00:49.306828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:42.955 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:42.956 
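The `verify_raid_bdev_state raid_bdev1 online raid1 0 3` step above checks fields of the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns. As a minimal sketch (not part of the test script; field names taken from the RPC dump in this log), the same checks can be expressed in Python:

```python
import json

# JSON shape as dumped by `bdev_raid_get_bdevs` for raid_bdev1 in this log
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt3", "uuid": "00000000-0000-0000-0000-000000000003",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the fields verify_raid_bdev_state compares against its arguments
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

verify_state(raid_bdev_info, "online", "raid1", 0, 3)

# Equivalent of the jq filter selecting configured base bdev names
configured = [b["name"] for b in raid_bdev_info["base_bdevs_list"]
              if b["is_configured"]]
print(configured)  # ['pt1', 'pt2', 'pt3']
```

Note `strip_size_kb` is 0 here because raid1 mirrors rather than stripes.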
14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:42.956 "name": "raid_bdev1", 00:31:42.956 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:42.956 "strip_size_kb": 0, 00:31:42.956 "state": "online", 00:31:42.956 "raid_level": "raid1", 00:31:42.956 "superblock": true, 00:31:42.956 "num_base_bdevs": 3, 00:31:42.956 "num_base_bdevs_discovered": 3, 00:31:42.956 "num_base_bdevs_operational": 3, 00:31:42.956 "base_bdevs_list": [ 00:31:42.956 { 00:31:42.956 "name": "pt1", 00:31:42.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:42.956 "is_configured": true, 00:31:42.956 "data_offset": 2048, 00:31:42.956 "data_size": 63488 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "name": "pt2", 00:31:42.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:42.956 "is_configured": true, 00:31:42.956 "data_offset": 2048, 00:31:42.956 "data_size": 63488 00:31:42.956 }, 00:31:42.956 { 00:31:42.956 "name": "pt3", 00:31:42.956 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:42.956 "is_configured": true, 00:31:42.956 "data_offset": 2048, 00:31:42.956 "data_size": 63488 00:31:42.956 } 00:31:42.956 ] 00:31:42.956 }' 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:42.956 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.231 [2024-10-09 14:00:49.752018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:43.231 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.491 "name": "raid_bdev1", 00:31:43.491 "aliases": [ 00:31:43.491 "24d81462-872e-432c-a9b9-044e6bccd25c" 00:31:43.491 ], 00:31:43.491 "product_name": "Raid Volume", 00:31:43.491 "block_size": 512, 00:31:43.491 "num_blocks": 63488, 00:31:43.491 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:43.491 "assigned_rate_limits": { 00:31:43.491 "rw_ios_per_sec": 0, 00:31:43.491 "rw_mbytes_per_sec": 0, 00:31:43.491 "r_mbytes_per_sec": 0, 00:31:43.491 "w_mbytes_per_sec": 0 00:31:43.491 }, 00:31:43.491 "claimed": false, 00:31:43.491 "zoned": false, 00:31:43.491 
"supported_io_types": { 00:31:43.491 "read": true, 00:31:43.491 "write": true, 00:31:43.491 "unmap": false, 00:31:43.491 "flush": false, 00:31:43.491 "reset": true, 00:31:43.491 "nvme_admin": false, 00:31:43.491 "nvme_io": false, 00:31:43.491 "nvme_io_md": false, 00:31:43.491 "write_zeroes": true, 00:31:43.491 "zcopy": false, 00:31:43.491 "get_zone_info": false, 00:31:43.491 "zone_management": false, 00:31:43.491 "zone_append": false, 00:31:43.491 "compare": false, 00:31:43.491 "compare_and_write": false, 00:31:43.491 "abort": false, 00:31:43.491 "seek_hole": false, 00:31:43.491 "seek_data": false, 00:31:43.491 "copy": false, 00:31:43.491 "nvme_iov_md": false 00:31:43.491 }, 00:31:43.491 "memory_domains": [ 00:31:43.491 { 00:31:43.491 "dma_device_id": "system", 00:31:43.491 "dma_device_type": 1 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:43.491 "dma_device_type": 2 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "dma_device_id": "system", 00:31:43.491 "dma_device_type": 1 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:43.491 "dma_device_type": 2 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "dma_device_id": "system", 00:31:43.491 "dma_device_type": 1 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:43.491 "dma_device_type": 2 00:31:43.491 } 00:31:43.491 ], 00:31:43.491 "driver_specific": { 00:31:43.491 "raid": { 00:31:43.491 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:43.491 "strip_size_kb": 0, 00:31:43.491 "state": "online", 00:31:43.491 "raid_level": "raid1", 00:31:43.491 "superblock": true, 00:31:43.491 "num_base_bdevs": 3, 00:31:43.491 "num_base_bdevs_discovered": 3, 00:31:43.491 "num_base_bdevs_operational": 3, 00:31:43.491 "base_bdevs_list": [ 00:31:43.491 { 00:31:43.491 "name": "pt1", 00:31:43.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:43.491 "is_configured": true, 00:31:43.491 "data_offset": 2048, 
00:31:43.491 "data_size": 63488 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "name": "pt2", 00:31:43.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:43.491 "is_configured": true, 00:31:43.491 "data_offset": 2048, 00:31:43.491 "data_size": 63488 00:31:43.491 }, 00:31:43.491 { 00:31:43.491 "name": "pt3", 00:31:43.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:43.491 "is_configured": true, 00:31:43.491 "data_offset": 2048, 00:31:43.491 "data_size": 63488 00:31:43.491 } 00:31:43.491 ] 00:31:43.491 } 00:31:43.491 } 00:31:43.491 }' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:43.491 pt2 00:31:43.491 pt3' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.491 14:00:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.491 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:43.491 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:43.491 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:43.491 14:00:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:31:43.491 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.491 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.491 [2024-10-09 14:00:50.016054] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24d81462-872e-432c-a9b9-044e6bccd25c 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24d81462-872e-432c-a9b9-044e6bccd25c ']' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 [2024-10-09 14:00:50.059759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.751 [2024-10-09 14:00:50.059791] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:43.751 [2024-10-09 14:00:50.059882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:43.751 [2024-10-09 14:00:50.059968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:43.751 [2024-10-09 14:00:50.059986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:31:43.751 14:00:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 [2024-10-09 14:00:50.191809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:43.751 [2024-10-09 14:00:50.194379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:43.751 [2024-10-09 14:00:50.194441] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:43.751 [2024-10-09 14:00:50.194504] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:43.751 [2024-10-09 14:00:50.194571] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:43.751 [2024-10-09 14:00:50.194616] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:43.751 [2024-10-09 14:00:50.194636] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.751 [2024-10-09 14:00:50.194660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:31:43.751 request: 00:31:43.751 { 00:31:43.751 "name": "raid_bdev1", 00:31:43.751 "raid_level": "raid1", 00:31:43.751 "base_bdevs": [ 00:31:43.751 "malloc1", 00:31:43.751 "malloc2", 00:31:43.751 "malloc3" 00:31:43.751 ], 00:31:43.751 "superblock": false, 00:31:43.751 "method": "bdev_raid_create", 00:31:43.751 "req_id": 1 00:31:43.751 } 00:31:43.751 Got JSON-RPC error response 00:31:43.751 response: 00:31:43.751 { 00:31:43.751 "code": -17, 00:31:43.751 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:43.751 } 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:43.751 14:00:50 
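The negative-path check above expects the duplicate `bdev_raid_create` call to fail with the JSON-RPC error shown. A small sketch (assuming the SPDK convention of returning negated errno values in the `code` field, which the "File exists" message here is consistent with) of decoding that payload:

```python
import errno
import json

# Error payload as returned for the duplicate bdev_raid_create call in this log
error_response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# -17 decodes to EEXIST ("File exists"), matching the message text
assert error_response["code"] == -errno.EEXIST
print(errno.errorcode[-error_response["code"]])  # EEXIST
```

This is why the test script asserts `es=1` afterwards: the RPC wrapper turns the error response into a nonzero exit status.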
bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 [2024-10-09 14:00:50.247785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:43.751 [2024-10-09 14:00:50.247851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:43.751 [2024-10-09 14:00:50.247875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:43.751 [2024-10-09 14:00:50.247890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:43.751 [2024-10-09 14:00:50.250662] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:43.751 [2024-10-09 14:00:50.250705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:43.751 [2024-10-09 14:00:50.250803] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:43.751 [2024-10-09 14:00:50.250851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:43.751 pt1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:43.751 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:43.751 "name": "raid_bdev1", 00:31:43.752 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:43.752 "strip_size_kb": 0, 00:31:43.752 "state": "configuring", 00:31:43.752 "raid_level": "raid1", 00:31:43.752 "superblock": true, 00:31:43.752 "num_base_bdevs": 3, 00:31:43.752 "num_base_bdevs_discovered": 1, 00:31:43.752 "num_base_bdevs_operational": 3, 00:31:43.752 "base_bdevs_list": [ 00:31:43.752 { 00:31:43.752 "name": "pt1", 00:31:43.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:43.752 "is_configured": true, 00:31:43.752 "data_offset": 2048, 00:31:43.752 "data_size": 63488 00:31:43.752 }, 00:31:43.752 { 00:31:43.752 "name": null, 00:31:43.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:43.752 "is_configured": false, 00:31:43.752 "data_offset": 2048, 00:31:43.752 "data_size": 63488 00:31:43.752 }, 00:31:43.752 { 00:31:43.752 "name": null, 00:31:43.752 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:43.752 "is_configured": false, 00:31:43.752 "data_offset": 2048, 00:31:43.752 "data_size": 63488 00:31:43.752 } 00:31:43.752 ] 00:31:43.752 }' 00:31:43.752 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:43.752 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.320 [2024-10-09 14:00:50.715920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:44.320 [2024-10-09 14:00:50.715990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.320 [2024-10-09 14:00:50.716014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:44.320 [2024-10-09 14:00:50.716032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.320 [2024-10-09 14:00:50.716452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.320 [2024-10-09 14:00:50.716480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:44.320 [2024-10-09 14:00:50.716573] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:44.320 [2024-10-09 14:00:50.716601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:44.320 pt2 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.320 [2024-10-09 14:00:50.723911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:44.320 
14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.320 "name": "raid_bdev1", 00:31:44.320 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:44.320 "strip_size_kb": 0, 00:31:44.320 "state": "configuring", 00:31:44.320 "raid_level": "raid1", 00:31:44.320 "superblock": true, 00:31:44.320 "num_base_bdevs": 3, 00:31:44.320 "num_base_bdevs_discovered": 1, 00:31:44.320 "num_base_bdevs_operational": 3, 00:31:44.320 "base_bdevs_list": [ 00:31:44.320 { 00:31:44.320 "name": "pt1", 00:31:44.320 "uuid": "00000000-0000-0000-0000-000000000001", 
00:31:44.320 "is_configured": true, 00:31:44.320 "data_offset": 2048, 00:31:44.320 "data_size": 63488 00:31:44.320 }, 00:31:44.320 { 00:31:44.320 "name": null, 00:31:44.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:44.320 "is_configured": false, 00:31:44.320 "data_offset": 0, 00:31:44.320 "data_size": 63488 00:31:44.320 }, 00:31:44.320 { 00:31:44.320 "name": null, 00:31:44.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:44.320 "is_configured": false, 00:31:44.320 "data_offset": 2048, 00:31:44.320 "data_size": 63488 00:31:44.320 } 00:31:44.320 ] 00:31:44.320 }' 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.320 14:00:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.888 [2024-10-09 14:00:51.192013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:44.888 [2024-10-09 14:00:51.192080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.888 [2024-10-09 14:00:51.192104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:44.888 [2024-10-09 14:00:51.192116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.888 [2024-10-09 14:00:51.192533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.888 [2024-10-09 
14:00:51.192564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:44.888 [2024-10-09 14:00:51.192651] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:44.888 [2024-10-09 14:00:51.192680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:44.888 pt2 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.888 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.888 [2024-10-09 14:00:51.203979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:44.889 [2024-10-09 14:00:51.204026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.889 [2024-10-09 14:00:51.204049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:44.889 [2024-10-09 14:00:51.204060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.889 [2024-10-09 14:00:51.204413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.889 [2024-10-09 14:00:51.204430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:44.889 [2024-10-09 14:00:51.204496] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:44.889 [2024-10-09 14:00:51.204516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is 
claimed 00:31:44.889 [2024-10-09 14:00:51.204639] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:31:44.889 [2024-10-09 14:00:51.204650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:44.889 [2024-10-09 14:00:51.204892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:44.889 [2024-10-09 14:00:51.205003] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:31:44.889 [2024-10-09 14:00:51.205017] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:31:44.889 [2024-10-09 14:00:51.205117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.889 pt3 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.889 "name": "raid_bdev1", 00:31:44.889 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:44.889 "strip_size_kb": 0, 00:31:44.889 "state": "online", 00:31:44.889 "raid_level": "raid1", 00:31:44.889 "superblock": true, 00:31:44.889 "num_base_bdevs": 3, 00:31:44.889 "num_base_bdevs_discovered": 3, 00:31:44.889 "num_base_bdevs_operational": 3, 00:31:44.889 "base_bdevs_list": [ 00:31:44.889 { 00:31:44.889 "name": "pt1", 00:31:44.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:44.889 "is_configured": true, 00:31:44.889 "data_offset": 2048, 00:31:44.889 "data_size": 63488 00:31:44.889 }, 00:31:44.889 { 00:31:44.889 "name": "pt2", 00:31:44.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:44.889 "is_configured": true, 00:31:44.889 "data_offset": 2048, 00:31:44.889 "data_size": 63488 00:31:44.889 }, 00:31:44.889 { 00:31:44.889 "name": "pt3", 00:31:44.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:44.889 "is_configured": true, 00:31:44.889 "data_offset": 2048, 00:31:44.889 "data_size": 63488 00:31:44.889 } 00:31:44.889 ] 00:31:44.889 }' 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.889 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:45.147 [2024-10-09 14:00:51.664437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.147 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.407 "name": "raid_bdev1", 00:31:45.407 "aliases": [ 00:31:45.407 "24d81462-872e-432c-a9b9-044e6bccd25c" 00:31:45.407 ], 00:31:45.407 "product_name": "Raid Volume", 00:31:45.407 "block_size": 512, 00:31:45.407 "num_blocks": 63488, 00:31:45.407 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:45.407 "assigned_rate_limits": { 00:31:45.407 "rw_ios_per_sec": 0, 00:31:45.407 "rw_mbytes_per_sec": 0, 00:31:45.407 "r_mbytes_per_sec": 0, 00:31:45.407 
"w_mbytes_per_sec": 0 00:31:45.407 }, 00:31:45.407 "claimed": false, 00:31:45.407 "zoned": false, 00:31:45.407 "supported_io_types": { 00:31:45.407 "read": true, 00:31:45.407 "write": true, 00:31:45.407 "unmap": false, 00:31:45.407 "flush": false, 00:31:45.407 "reset": true, 00:31:45.407 "nvme_admin": false, 00:31:45.407 "nvme_io": false, 00:31:45.407 "nvme_io_md": false, 00:31:45.407 "write_zeroes": true, 00:31:45.407 "zcopy": false, 00:31:45.407 "get_zone_info": false, 00:31:45.407 "zone_management": false, 00:31:45.407 "zone_append": false, 00:31:45.407 "compare": false, 00:31:45.407 "compare_and_write": false, 00:31:45.407 "abort": false, 00:31:45.407 "seek_hole": false, 00:31:45.407 "seek_data": false, 00:31:45.407 "copy": false, 00:31:45.407 "nvme_iov_md": false 00:31:45.407 }, 00:31:45.407 "memory_domains": [ 00:31:45.407 { 00:31:45.407 "dma_device_id": "system", 00:31:45.407 "dma_device_type": 1 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.407 "dma_device_type": 2 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "dma_device_id": "system", 00:31:45.407 "dma_device_type": 1 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.407 "dma_device_type": 2 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "dma_device_id": "system", 00:31:45.407 "dma_device_type": 1 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:45.407 "dma_device_type": 2 00:31:45.407 } 00:31:45.407 ], 00:31:45.407 "driver_specific": { 00:31:45.407 "raid": { 00:31:45.407 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:45.407 "strip_size_kb": 0, 00:31:45.407 "state": "online", 00:31:45.407 "raid_level": "raid1", 00:31:45.407 "superblock": true, 00:31:45.407 "num_base_bdevs": 3, 00:31:45.407 "num_base_bdevs_discovered": 3, 00:31:45.407 "num_base_bdevs_operational": 3, 00:31:45.407 "base_bdevs_list": [ 00:31:45.407 { 00:31:45.407 "name": "pt1", 00:31:45.407 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:31:45.407 "is_configured": true, 00:31:45.407 "data_offset": 2048, 00:31:45.407 "data_size": 63488 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "name": "pt2", 00:31:45.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:45.407 "is_configured": true, 00:31:45.407 "data_offset": 2048, 00:31:45.407 "data_size": 63488 00:31:45.407 }, 00:31:45.407 { 00:31:45.407 "name": "pt3", 00:31:45.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:45.407 "is_configured": true, 00:31:45.407 "data_offset": 2048, 00:31:45.407 "data_size": 63488 00:31:45.407 } 00:31:45.407 ] 00:31:45.407 } 00:31:45.407 } 00:31:45.407 }' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:31:45.407 pt2 00:31:45.407 pt3' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:31:45.407 [2024-10-09 14:00:51.912377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24d81462-872e-432c-a9b9-044e6bccd25c '!=' 24d81462-872e-432c-a9b9-044e6bccd25c ']' 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:31:45.407 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:45.408 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:45.408 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:31:45.408 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.408 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.666 [2024-10-09 14:00:51.960183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:45.666 "name": "raid_bdev1", 00:31:45.666 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:45.666 "strip_size_kb": 0, 00:31:45.666 "state": "online", 00:31:45.666 "raid_level": "raid1", 00:31:45.666 "superblock": true, 00:31:45.666 "num_base_bdevs": 3, 00:31:45.666 "num_base_bdevs_discovered": 2, 00:31:45.666 "num_base_bdevs_operational": 2, 00:31:45.666 "base_bdevs_list": [ 00:31:45.666 { 00:31:45.666 "name": null, 00:31:45.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.666 "is_configured": false, 00:31:45.666 "data_offset": 0, 00:31:45.666 "data_size": 63488 00:31:45.666 }, 00:31:45.666 { 00:31:45.666 "name": "pt2", 00:31:45.666 
"uuid": "00000000-0000-0000-0000-000000000002", 00:31:45.666 "is_configured": true, 00:31:45.666 "data_offset": 2048, 00:31:45.666 "data_size": 63488 00:31:45.666 }, 00:31:45.666 { 00:31:45.666 "name": "pt3", 00:31:45.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:45.666 "is_configured": true, 00:31:45.666 "data_offset": 2048, 00:31:45.666 "data_size": 63488 00:31:45.666 } 00:31:45.666 ] 00:31:45.666 }' 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:45.666 14:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.925 [2024-10-09 14:00:52.388256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:45.925 [2024-10-09 14:00:52.388294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:45.925 [2024-10-09 14:00:52.388370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.925 [2024-10-09 14:00:52.388434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:45.925 [2024-10-09 14:00:52.388445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:45.925 14:00:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.925 [2024-10-09 14:00:52.456274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:45.925 [2024-10-09 14:00:52.456335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.925 [2024-10-09 14:00:52.456361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:45.925 [2024-10-09 14:00:52.456374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.925 [2024-10-09 14:00:52.458991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:45.925 [2024-10-09 14:00:52.459028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:45.925 [2024-10-09 14:00:52.459111] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:45.925 [2024-10-09 14:00:52.459147] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:45.925 pt2 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.925 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.184 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.184 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:46.184 "name": "raid_bdev1", 00:31:46.184 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:46.184 "strip_size_kb": 0, 00:31:46.184 "state": "configuring", 00:31:46.184 "raid_level": "raid1", 00:31:46.184 "superblock": true, 00:31:46.184 "num_base_bdevs": 3, 00:31:46.184 "num_base_bdevs_discovered": 1, 00:31:46.184 "num_base_bdevs_operational": 2, 00:31:46.184 "base_bdevs_list": [ 00:31:46.184 { 00:31:46.184 "name": null, 00:31:46.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.184 "is_configured": false, 00:31:46.184 "data_offset": 2048, 00:31:46.184 "data_size": 63488 00:31:46.184 }, 00:31:46.184 { 00:31:46.184 "name": "pt2", 
00:31:46.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:46.184 "is_configured": true, 00:31:46.184 "data_offset": 2048, 00:31:46.184 "data_size": 63488 00:31:46.184 }, 00:31:46.184 { 00:31:46.184 "name": null, 00:31:46.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:46.184 "is_configured": false, 00:31:46.184 "data_offset": 2048, 00:31:46.184 "data_size": 63488 00:31:46.184 } 00:31:46.184 ] 00:31:46.184 }' 00:31:46.184 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:46.184 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.443 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.443 [2024-10-09 14:00:52.896400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:46.443 [2024-10-09 14:00:52.896463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.443 [2024-10-09 14:00:52.896490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:46.444 [2024-10-09 14:00:52.896502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.444 [2024-10-09 14:00:52.896998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.444 [2024-10-09 14:00:52.897021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:31:46.444 [2024-10-09 14:00:52.897105] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:46.444 [2024-10-09 14:00:52.897130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:46.444 [2024-10-09 14:00:52.897228] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:46.444 [2024-10-09 14:00:52.897240] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:46.444 [2024-10-09 14:00:52.897528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:46.444 [2024-10-09 14:00:52.897686] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:46.444 [2024-10-09 14:00:52.897703] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:31:46.444 [2024-10-09 14:00:52.897821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:46.444 pt3 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:46.444 14:00:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:46.444 "name": "raid_bdev1", 00:31:46.444 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:46.444 "strip_size_kb": 0, 00:31:46.444 "state": "online", 00:31:46.444 "raid_level": "raid1", 00:31:46.444 "superblock": true, 00:31:46.444 "num_base_bdevs": 3, 00:31:46.444 "num_base_bdevs_discovered": 2, 00:31:46.444 "num_base_bdevs_operational": 2, 00:31:46.444 "base_bdevs_list": [ 00:31:46.444 { 00:31:46.444 "name": null, 00:31:46.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.444 "is_configured": false, 00:31:46.444 "data_offset": 2048, 00:31:46.444 "data_size": 63488 00:31:46.444 }, 00:31:46.444 { 00:31:46.444 "name": "pt2", 00:31:46.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:46.444 "is_configured": true, 00:31:46.444 "data_offset": 2048, 00:31:46.444 "data_size": 63488 00:31:46.444 }, 00:31:46.444 { 00:31:46.444 "name": "pt3", 00:31:46.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:46.444 "is_configured": true, 00:31:46.444 "data_offset": 2048, 00:31:46.444 "data_size": 63488 00:31:46.444 } 
00:31:46.444 ] 00:31:46.444 }' 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:46.444 14:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 [2024-10-09 14:00:53.348496] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:47.012 [2024-10-09 14:00:53.348658] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:47.012 [2024-10-09 14:00:53.348842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:47.012 [2024-10-09 14:00:53.348994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:47.012 [2024-10-09 14:00:53.349109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 [2024-10-09 14:00:53.416467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:47.012 [2024-10-09 14:00:53.416642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.012 [2024-10-09 14:00:53.416670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:47.012 [2024-10-09 14:00:53.416685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.012 [2024-10-09 14:00:53.419214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.012 [2024-10-09 14:00:53.419257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:47.012 [2024-10-09 14:00:53.419330] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:47.012 [2024-10-09 14:00:53.419371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:31:47.012 [2024-10-09 14:00:53.419468] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:47.012 [2024-10-09 14:00:53.419487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:47.012 [2024-10-09 14:00:53.419508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:31:47.012 [2024-10-09 14:00:53.419567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:47.012 pt1 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.012 "name": "raid_bdev1", 00:31:47.012 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:47.012 "strip_size_kb": 0, 00:31:47.012 "state": "configuring", 00:31:47.012 "raid_level": "raid1", 00:31:47.012 "superblock": true, 00:31:47.012 "num_base_bdevs": 3, 00:31:47.012 "num_base_bdevs_discovered": 1, 00:31:47.012 "num_base_bdevs_operational": 2, 00:31:47.012 "base_bdevs_list": [ 00:31:47.012 { 00:31:47.012 "name": null, 00:31:47.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.012 "is_configured": false, 00:31:47.012 "data_offset": 2048, 00:31:47.012 "data_size": 63488 00:31:47.012 }, 00:31:47.012 { 00:31:47.012 "name": "pt2", 00:31:47.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:47.012 "is_configured": true, 00:31:47.012 "data_offset": 2048, 00:31:47.012 "data_size": 63488 00:31:47.012 }, 00:31:47.012 { 00:31:47.012 "name": null, 00:31:47.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:47.012 "is_configured": false, 00:31:47.012 "data_offset": 2048, 00:31:47.012 "data_size": 63488 00:31:47.012 } 00:31:47.012 ] 00:31:47.012 }' 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.012 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:47.580 14:00:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.580 [2024-10-09 14:00:53.912603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:47.580 [2024-10-09 14:00:53.912673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.580 [2024-10-09 14:00:53.912696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:47.580 [2024-10-09 14:00:53.912712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.580 [2024-10-09 14:00:53.913180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.580 [2024-10-09 14:00:53.913206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:47.580 [2024-10-09 14:00:53.913315] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:47.580 [2024-10-09 14:00:53.913365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:47.580 [2024-10-09 14:00:53.913464] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:31:47.580 
[2024-10-09 14:00:53.913479] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:47.580 [2024-10-09 14:00:53.913776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:31:47.580 [2024-10-09 14:00:53.913924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:31:47.580 [2024-10-09 14:00:53.913937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:31:47.580 [2024-10-09 14:00:53.914055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:47.580 pt3 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.580 14:00:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.580 "name": "raid_bdev1", 00:31:47.580 "uuid": "24d81462-872e-432c-a9b9-044e6bccd25c", 00:31:47.580 "strip_size_kb": 0, 00:31:47.580 "state": "online", 00:31:47.580 "raid_level": "raid1", 00:31:47.580 "superblock": true, 00:31:47.580 "num_base_bdevs": 3, 00:31:47.580 "num_base_bdevs_discovered": 2, 00:31:47.580 "num_base_bdevs_operational": 2, 00:31:47.580 "base_bdevs_list": [ 00:31:47.580 { 00:31:47.580 "name": null, 00:31:47.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.580 "is_configured": false, 00:31:47.580 "data_offset": 2048, 00:31:47.580 "data_size": 63488 00:31:47.580 }, 00:31:47.580 { 00:31:47.580 "name": "pt2", 00:31:47.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:47.580 "is_configured": true, 00:31:47.580 "data_offset": 2048, 00:31:47.580 "data_size": 63488 00:31:47.580 }, 00:31:47.580 { 00:31:47.580 "name": "pt3", 00:31:47.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:47.580 "is_configured": true, 00:31:47.580 "data_offset": 2048, 00:31:47.580 "data_size": 63488 00:31:47.580 } 00:31:47.580 ] 00:31:47.580 }' 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.580 14:00:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:47.840 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:47.840 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 
-- # rpc_cmd bdev_raid_get_bdevs online 00:31:47.840 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.840 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.100 [2024-10-09 14:00:54.425068] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24d81462-872e-432c-a9b9-044e6bccd25c '!=' 24d81462-872e-432c-a9b9-044e6bccd25c ']' 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80008 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80008 ']' 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80008 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80008 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80008' 00:31:48.100 killing process with pid 80008 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 80008 00:31:48.100 [2024-10-09 14:00:54.512964] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:48.100 [2024-10-09 14:00:54.513056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.100 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 80008 00:31:48.100 [2024-10-09 14:00:54.513140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.100 [2024-10-09 14:00:54.513153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:31:48.100 [2024-10-09 14:00:54.549793] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:48.359 14:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:48.359 00:31:48.359 real 0m6.650s 00:31:48.359 user 0m11.247s 00:31:48.359 sys 0m1.435s 00:31:48.359 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.359 ************************************ 00:31:48.359 END TEST raid_superblock_test 00:31:48.359 ************************************ 00:31:48.359 14:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.359 14:00:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:31:48.359 14:00:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:48.359 14:00:54 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:31:48.359 14:00:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:48.359 ************************************ 00:31:48.359 START TEST raid_read_error_test 00:31:48.359 ************************************ 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
local base_bdevs 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N12Buh2zkt 00:31:48.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80442 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80442 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80442 ']' 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:48.359 14:00:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:48.618 [2024-10-09 14:00:54.995798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:48.618 [2024-10-09 14:00:54.996254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80442 ] 00:31:48.920 [2024-10-09 14:00:55.181908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.920 [2024-10-09 14:00:55.239518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.920 [2024-10-09 14:00:55.291431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:48.920 [2024-10-09 14:00:55.291723] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.506 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.506 BaseBdev1_malloc 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.507 true 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.507 [2024-10-09 14:00:55.983503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:49.507 [2024-10-09 14:00:55.983594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.507 [2024-10-09 14:00:55.983627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:49.507 [2024-10-09 14:00:55.983641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.507 [2024-10-09 14:00:55.986330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.507 [2024-10-09 14:00:55.986494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:49.507 BaseBdev1 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.507 14:00:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:31:49.507 BaseBdev2_malloc 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.508 true 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.508 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.508 [2024-10-09 14:00:56.035135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:49.508 [2024-10-09 14:00:56.035190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.508 [2024-10-09 14:00:56.035214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:49.508 [2024-10-09 14:00:56.035226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.508 [2024-10-09 14:00:56.038042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.508 [2024-10-09 14:00:56.038207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:49.508 BaseBdev2 00:31:49.509 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.509 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:49.509 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:49.509 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.509 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.767 BaseBdev3_malloc 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.767 true 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.767 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.767 [2024-10-09 14:00:56.076718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:49.767 [2024-10-09 14:00:56.076770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.768 [2024-10-09 14:00:56.076794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:49.768 [2024-10-09 14:00:56.076807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.768 [2024-10-09 14:00:56.079547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.768 [2024-10-09 14:00:56.079598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:49.768 BaseBdev3 00:31:49.768 14:00:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.768 [2024-10-09 14:00:56.088775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:49.768 [2024-10-09 14:00:56.091125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:49.768 [2024-10-09 14:00:56.091215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:49.768 [2024-10-09 14:00:56.091407] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:49.768 [2024-10-09 14:00:56.091426] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:49.768 [2024-10-09 14:00:56.091694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:49.768 [2024-10-09 14:00:56.091833] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:49.768 [2024-10-09 14:00:56.091845] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:31:49.768 [2024-10-09 14:00:56.091983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:49.768 14:00:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:49.768 "name": "raid_bdev1", 00:31:49.768 "uuid": "e6af9c8a-0036-460b-91f6-7869a2943828", 00:31:49.768 "strip_size_kb": 0, 00:31:49.768 "state": "online", 00:31:49.768 "raid_level": "raid1", 00:31:49.768 "superblock": true, 00:31:49.768 "num_base_bdevs": 3, 00:31:49.768 "num_base_bdevs_discovered": 3, 00:31:49.768 "num_base_bdevs_operational": 3, 00:31:49.768 "base_bdevs_list": [ 00:31:49.768 { 00:31:49.768 "name": "BaseBdev1", 00:31:49.768 "uuid": "ded3cc8c-c391-5fd8-afbd-5c5d71a6a72e", 00:31:49.768 
"is_configured": true, 00:31:49.768 "data_offset": 2048, 00:31:49.768 "data_size": 63488 00:31:49.768 }, 00:31:49.768 { 00:31:49.768 "name": "BaseBdev2", 00:31:49.768 "uuid": "e7725199-9d07-5825-85a4-36a6fcb11a56", 00:31:49.768 "is_configured": true, 00:31:49.768 "data_offset": 2048, 00:31:49.768 "data_size": 63488 00:31:49.768 }, 00:31:49.768 { 00:31:49.768 "name": "BaseBdev3", 00:31:49.768 "uuid": "9f72ac20-0a5a-5809-887a-4342e5b25beb", 00:31:49.768 "is_configured": true, 00:31:49.768 "data_offset": 2048, 00:31:49.768 "data_size": 63488 00:31:49.768 } 00:31:49.768 ] 00:31:49.768 }' 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:49.768 14:00:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:50.026 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:50.026 14:00:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:50.284 [2024-10-09 14:00:56.657380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:51.219 "name": "raid_bdev1", 00:31:51.219 "uuid": "e6af9c8a-0036-460b-91f6-7869a2943828", 00:31:51.219 "strip_size_kb": 0, 00:31:51.219 "state": "online", 00:31:51.219 "raid_level": 
"raid1", 00:31:51.219 "superblock": true, 00:31:51.219 "num_base_bdevs": 3, 00:31:51.219 "num_base_bdevs_discovered": 3, 00:31:51.219 "num_base_bdevs_operational": 3, 00:31:51.219 "base_bdevs_list": [ 00:31:51.219 { 00:31:51.219 "name": "BaseBdev1", 00:31:51.219 "uuid": "ded3cc8c-c391-5fd8-afbd-5c5d71a6a72e", 00:31:51.219 "is_configured": true, 00:31:51.219 "data_offset": 2048, 00:31:51.219 "data_size": 63488 00:31:51.219 }, 00:31:51.219 { 00:31:51.219 "name": "BaseBdev2", 00:31:51.219 "uuid": "e7725199-9d07-5825-85a4-36a6fcb11a56", 00:31:51.219 "is_configured": true, 00:31:51.219 "data_offset": 2048, 00:31:51.219 "data_size": 63488 00:31:51.219 }, 00:31:51.219 { 00:31:51.219 "name": "BaseBdev3", 00:31:51.219 "uuid": "9f72ac20-0a5a-5809-887a-4342e5b25beb", 00:31:51.219 "is_configured": true, 00:31:51.219 "data_offset": 2048, 00:31:51.219 "data_size": 63488 00:31:51.219 } 00:31:51.219 ] 00:31:51.219 }' 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:51.219 14:00:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.477 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:51.477 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:51.477 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.477 [2024-10-09 14:00:58.015096] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:51.477 [2024-10-09 14:00:58.015142] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:51.477 [2024-10-09 14:00:58.018000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:51.477 [2024-10-09 14:00:58.018203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:51.477 [2024-10-09 14:00:58.018333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:51.477 [2024-10-09 14:00:58.018352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:31:51.477 { 00:31:51.477 "results": [ 00:31:51.477 { 00:31:51.478 "job": "raid_bdev1", 00:31:51.478 "core_mask": "0x1", 00:31:51.478 "workload": "randrw", 00:31:51.478 "percentage": 50, 00:31:51.478 "status": "finished", 00:31:51.478 "queue_depth": 1, 00:31:51.478 "io_size": 131072, 00:31:51.478 "runtime": 1.355225, 00:31:51.478 "iops": 13772.620782526887, 00:31:51.478 "mibps": 1721.5775978158608, 00:31:51.478 "io_failed": 0, 00:31:51.478 "io_timeout": 0, 00:31:51.478 "avg_latency_us": 69.79823525059636, 00:31:51.478 "min_latency_us": 24.502857142857142, 00:31:51.478 "max_latency_us": 1591.5885714285714 00:31:51.478 } 00:31:51.478 ], 00:31:51.478 "core_count": 1 00:31:51.478 } 00:31:51.478 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:51.478 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80442 00:31:51.478 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80442 ']' 00:31:51.478 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80442 00:31:51.478 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80442 00:31:51.736 killing process with pid 80442 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 80442' 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80442 00:31:51.736 [2024-10-09 14:00:58.061867] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:51.736 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80442 00:31:51.736 [2024-10-09 14:00:58.087833] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N12Buh2zkt 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:51.994 00:31:51.994 real 0m3.481s 00:31:51.994 user 0m4.473s 00:31:51.994 sys 0m0.611s 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:51.994 14:00:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 ************************************ 00:31:51.994 END TEST raid_read_error_test 00:31:51.994 ************************************ 00:31:51.994 14:00:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:31:51.994 14:00:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:51.994 14:00:58 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.994 14:00:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:51.994 ************************************ 00:31:51.994 START TEST raid_write_error_test 00:31:51.994 ************************************ 00:31:51.994 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:31:51.994 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:51.995 14:00:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yXp2KgATyu 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80576 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80576 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80576 ']' 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
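The bdevperf command line above runs with a fixed 128 KiB I/O size (`-o 128k -q 1 -w randrw -M 50`), and the result blocks in this log report both `iops` and `mibps` for the same run. The two figures are related by that fixed I/O size; a small Python sketch (the helper name is mine, not part of SPDK) reproduces the conversion using the numbers from the raid_read_error_test result block earlier in the log:

```python
# bdevperf throughput: MiB/s = IOPS * io_size_bytes / 2**20.
# The constants below are taken from this log, not computed independently.

IO_SIZE = 131072  # bytes; the "-o 128k" bdevperf option

def iops_to_mibps(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed per-I/O size."""
    return iops * io_size / 2**20

read_iops = 13772.620782526887   # "iops" from the read-error results JSON
read_mibps = 1721.5775978158608  # "mibps" from the same results JSON

# The two reported figures agree to floating-point precision.
assert abs(iops_to_mibps(read_iops) - read_mibps) < 1e-6
```

The same relation holds for the write-error run later in the log (16211.09 IOPS × 128 KiB ≈ 2026.39 MiB/s).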
00:31:51.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.995 14:00:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:51.995 [2024-10-09 14:00:58.511479] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:31:51.995 [2024-10-09 14:00:58.511647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80576 ] 00:31:52.253 [2024-10-09 14:00:58.678206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.253 [2024-10-09 14:00:58.727052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.253 [2024-10-09 14:00:58.771712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:52.253 [2024-10-09 14:00:58.771750] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 BaseBdev1_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.188 14:00:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 true 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 [2024-10-09 14:00:59.529468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:53.188 [2024-10-09 14:00:59.529542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.188 [2024-10-09 14:00:59.529579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:53.188 [2024-10-09 14:00:59.529593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.188 [2024-10-09 14:00:59.532509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.188 [2024-10-09 14:00:59.532572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:53.188 BaseBdev1 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 BaseBdev2_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 true 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.188 [2024-10-09 14:00:59.581506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:53.188 [2024-10-09 14:00:59.581593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.188 [2024-10-09 14:00:59.581618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:53.188 [2024-10-09 14:00:59.581631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.188 [2024-10-09 14:00:59.584411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.188 [2024-10-09 14:00:59.584453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:53.188 BaseBdev2 00:31:53.188 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.189 BaseBdev3_malloc 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.189 true 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.189 [2024-10-09 14:00:59.623224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:53.189 [2024-10-09 14:00:59.623273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:53.189 [2024-10-09 14:00:59.623312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:53.189 [2024-10-09 14:00:59.623325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:53.189 [2024-10-09 14:00:59.626157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:53.189 [2024-10-09 14:00:59.626198] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:53.189 BaseBdev3 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.189 [2024-10-09 14:00:59.635318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:53.189 [2024-10-09 14:00:59.637848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:53.189 [2024-10-09 14:00:59.637946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:53.189 [2024-10-09 14:00:59.638144] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:31:53.189 [2024-10-09 14:00:59.638170] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:53.189 [2024-10-09 14:00:59.638436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:53.189 [2024-10-09 14:00:59.638630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:31:53.189 [2024-10-09 14:00:59.638660] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:31:53.189 [2024-10-09 14:00:59.638808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
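The `verify_raid_bdev_state` call that follows pulls the raid bdev record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares its fields against the expected state. A minimal Python sketch of that same check, using a record trimmed to the fields visible in this log (the structure mirrors the dumped JSON; this is not SPDK code):

```python
import json

# Trimmed-down record with the fields verify_raid_bdev_state inspects,
# shaped like the bdev_raid_get_bdevs output dumped in this log.
raid_bdevs_json = """[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]"""

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    # Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(json.loads(raid_bdevs_json),
                              "raid_bdev1", "online", "raid1", 0, 3)
```

In the write-error run later in the log, the same check is repeated with an expected operational count of 2: after `bdev_error_inject_error EE_BaseBdev1_malloc write failure` degrades the RAID1 array, BaseBdev1's slot goes to `"name": null` and both `num_base_bdevs_discovered` and `num_base_bdevs_operational` drop from 3 to 2.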
00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:53.189 "name": "raid_bdev1", 00:31:53.189 "uuid": "b9aeaebd-ca75-4390-aea4-0656f63f2ed7", 00:31:53.189 "strip_size_kb": 0, 00:31:53.189 "state": "online", 00:31:53.189 "raid_level": "raid1", 00:31:53.189 "superblock": true, 00:31:53.189 "num_base_bdevs": 3, 00:31:53.189 "num_base_bdevs_discovered": 3, 00:31:53.189 "num_base_bdevs_operational": 3, 
00:31:53.189 "base_bdevs_list": [ 00:31:53.189 { 00:31:53.189 "name": "BaseBdev1", 00:31:53.189 "uuid": "2f652be7-42f4-570d-861e-432c2bf23e6c", 00:31:53.189 "is_configured": true, 00:31:53.189 "data_offset": 2048, 00:31:53.189 "data_size": 63488 00:31:53.189 }, 00:31:53.189 { 00:31:53.189 "name": "BaseBdev2", 00:31:53.189 "uuid": "205b36eb-a11a-581f-96eb-bf58f80c199e", 00:31:53.189 "is_configured": true, 00:31:53.189 "data_offset": 2048, 00:31:53.189 "data_size": 63488 00:31:53.189 }, 00:31:53.189 { 00:31:53.189 "name": "BaseBdev3", 00:31:53.189 "uuid": "197d95a9-f395-509b-b90e-87993de9ebe9", 00:31:53.189 "is_configured": true, 00:31:53.189 "data_offset": 2048, 00:31:53.189 "data_size": 63488 00:31:53.189 } 00:31:53.189 ] 00:31:53.189 }' 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:53.189 14:00:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:53.755 14:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:53.755 14:01:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:53.755 [2024-10-09 14:01:00.179843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.690 [2024-10-09 14:01:01.057062] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:31:54.690 [2024-10-09 14:01:01.057127] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:54.690 [2024-10-09 14:01:01.057357] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:54.690 14:01:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:54.690 "name": "raid_bdev1", 00:31:54.690 "uuid": "b9aeaebd-ca75-4390-aea4-0656f63f2ed7", 00:31:54.690 "strip_size_kb": 0, 00:31:54.690 "state": "online", 00:31:54.690 "raid_level": "raid1", 00:31:54.690 "superblock": true, 00:31:54.690 "num_base_bdevs": 3, 00:31:54.690 "num_base_bdevs_discovered": 2, 00:31:54.690 "num_base_bdevs_operational": 2, 00:31:54.690 "base_bdevs_list": [ 00:31:54.690 { 00:31:54.690 "name": null, 00:31:54.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.690 "is_configured": false, 00:31:54.690 "data_offset": 0, 00:31:54.690 "data_size": 63488 00:31:54.690 }, 00:31:54.690 { 00:31:54.690 "name": "BaseBdev2", 00:31:54.690 "uuid": "205b36eb-a11a-581f-96eb-bf58f80c199e", 00:31:54.690 "is_configured": true, 00:31:54.690 "data_offset": 2048, 00:31:54.690 "data_size": 63488 00:31:54.690 }, 00:31:54.690 { 00:31:54.690 "name": "BaseBdev3", 00:31:54.690 "uuid": "197d95a9-f395-509b-b90e-87993de9ebe9", 00:31:54.690 "is_configured": true, 00:31:54.690 "data_offset": 2048, 00:31:54.690 "data_size": 63488 00:31:54.690 } 00:31:54.690 ] 00:31:54.690 }' 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:54.690 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.257 [2024-10-09 14:01:01.527954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:55.257 [2024-10-09 14:01:01.527990] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:55.257 [2024-10-09 14:01:01.530615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:55.257 [2024-10-09 14:01:01.530667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.257 [2024-10-09 14:01:01.530768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:55.257 [2024-10-09 14:01:01.530786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:31:55.257 { 00:31:55.257 "results": [ 00:31:55.257 { 00:31:55.257 "job": "raid_bdev1", 00:31:55.257 "core_mask": "0x1", 00:31:55.257 "workload": "randrw", 00:31:55.257 "percentage": 50, 00:31:55.257 "status": "finished", 00:31:55.257 "queue_depth": 1, 00:31:55.257 "io_size": 131072, 00:31:55.257 "runtime": 1.34593, 00:31:55.257 "iops": 16211.095673623442, 00:31:55.257 "mibps": 2026.3869592029303, 00:31:55.257 "io_failed": 0, 00:31:55.257 "io_timeout": 0, 00:31:55.257 "avg_latency_us": 59.04075740016892, 00:31:55.257 "min_latency_us": 23.527619047619048, 00:31:55.257 "max_latency_us": 1521.3714285714286 00:31:55.257 } 00:31:55.257 ], 00:31:55.257 "core_count": 1 00:31:55.257 } 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80576 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80576 ']' 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80576 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80576 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:55.257 killing process with pid 80576 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80576' 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80576 00:31:55.257 [2024-10-09 14:01:01.573017] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:55.257 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80576 00:31:55.257 [2024-10-09 14:01:01.598367] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yXp2KgATyu 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:55.516 00:31:55.516 real 0m3.442s 00:31:55.516 user 0m4.452s 
00:31:55.516 sys 0m0.578s 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.516 14:01:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 ************************************ 00:31:55.516 END TEST raid_write_error_test 00:31:55.516 ************************************ 00:31:55.516 14:01:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:31:55.516 14:01:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:31:55.516 14:01:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:31:55.516 14:01:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:55.516 14:01:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.516 14:01:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 ************************************ 00:31:55.516 START TEST raid_state_function_test 00:31:55.516 ************************************ 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:31:55.516 14:01:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@216 -- # strip_size=64 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80704 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80704' 00:31:55.516 Process raid pid: 80704 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80704 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80704 ']' 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.516 14:01:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 [2024-10-09 14:01:02.029532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:31:55.516 [2024-10-09 14:01:02.029777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.775 [2024-10-09 14:01:02.210069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.775 [2024-10-09 14:01:02.256160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.775 [2024-10-09 14:01:02.300460] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:55.775 [2024-10-09 14:01:02.300513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.710 [2024-10-09 14:01:02.916008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:56.710 [2024-10-09 14:01:02.916190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:56.710 [2024-10-09 14:01:02.916222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.710 [2024-10-09 14:01:02.916237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.710 [2024-10-09 14:01:02.916245] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:31:56.710 [2024-10-09 14:01:02.916263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:56.710 [2024-10-09 14:01:02.916271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:56.710 [2024-10-09 14:01:02.916283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.710 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.711 "name": "Existed_Raid", 00:31:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.711 "strip_size_kb": 64, 00:31:56.711 "state": "configuring", 00:31:56.711 "raid_level": "raid0", 00:31:56.711 "superblock": false, 00:31:56.711 "num_base_bdevs": 4, 00:31:56.711 "num_base_bdevs_discovered": 0, 00:31:56.711 "num_base_bdevs_operational": 4, 00:31:56.711 "base_bdevs_list": [ 00:31:56.711 { 00:31:56.711 "name": "BaseBdev1", 00:31:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.711 "is_configured": false, 00:31:56.711 "data_offset": 0, 00:31:56.711 "data_size": 0 00:31:56.711 }, 00:31:56.711 { 00:31:56.711 "name": "BaseBdev2", 00:31:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.711 "is_configured": false, 00:31:56.711 "data_offset": 0, 00:31:56.711 "data_size": 0 00:31:56.711 }, 00:31:56.711 { 00:31:56.711 "name": "BaseBdev3", 00:31:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.711 "is_configured": false, 00:31:56.711 "data_offset": 0, 00:31:56.711 "data_size": 0 00:31:56.711 }, 00:31:56.711 { 00:31:56.711 "name": "BaseBdev4", 00:31:56.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.711 "is_configured": false, 00:31:56.711 "data_offset": 0, 00:31:56.711 "data_size": 0 00:31:56.711 } 00:31:56.711 ] 00:31:56.711 }' 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.711 14:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 [2024-10-09 14:01:03.360005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:56.970 [2024-10-09 14:01:03.360054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 [2024-10-09 14:01:03.368043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:56.970 [2024-10-09 14:01:03.368088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:56.970 [2024-10-09 14:01:03.368098] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.970 [2024-10-09 14:01:03.368111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.970 [2024-10-09 14:01:03.368119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:56.970 [2024-10-09 14:01:03.368131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:56.970 [2024-10-09 14:01:03.368138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:56.970 [2024-10-09 14:01:03.368150] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 [2024-10-09 14:01:03.385549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:56.970 BaseBdev1 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.970 [ 00:31:56.970 { 00:31:56.970 "name": "BaseBdev1", 00:31:56.970 "aliases": [ 00:31:56.970 "15e2a86c-b523-4b59-8321-cb3472287ff8" 00:31:56.970 ], 00:31:56.970 "product_name": "Malloc disk", 00:31:56.970 "block_size": 512, 00:31:56.970 "num_blocks": 65536, 00:31:56.970 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:56.970 "assigned_rate_limits": { 00:31:56.970 "rw_ios_per_sec": 0, 00:31:56.970 "rw_mbytes_per_sec": 0, 00:31:56.970 "r_mbytes_per_sec": 0, 00:31:56.970 "w_mbytes_per_sec": 0 00:31:56.970 }, 00:31:56.970 "claimed": true, 00:31:56.970 "claim_type": "exclusive_write", 00:31:56.970 "zoned": false, 00:31:56.970 "supported_io_types": { 00:31:56.970 "read": true, 00:31:56.970 "write": true, 00:31:56.970 "unmap": true, 00:31:56.970 "flush": true, 00:31:56.970 "reset": true, 00:31:56.970 "nvme_admin": false, 00:31:56.970 "nvme_io": false, 00:31:56.970 "nvme_io_md": false, 00:31:56.970 "write_zeroes": true, 00:31:56.970 "zcopy": true, 00:31:56.970 "get_zone_info": false, 00:31:56.970 "zone_management": false, 00:31:56.970 "zone_append": false, 00:31:56.970 "compare": false, 00:31:56.970 "compare_and_write": false, 00:31:56.970 "abort": true, 00:31:56.970 "seek_hole": false, 00:31:56.970 "seek_data": false, 00:31:56.970 "copy": true, 00:31:56.970 "nvme_iov_md": false 00:31:56.970 }, 00:31:56.970 "memory_domains": [ 00:31:56.970 { 00:31:56.970 "dma_device_id": "system", 00:31:56.970 "dma_device_type": 1 00:31:56.970 }, 00:31:56.970 { 00:31:56.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:56.970 "dma_device_type": 2 00:31:56.970 } 00:31:56.970 ], 00:31:56.970 "driver_specific": {} 00:31:56.970 } 00:31:56.970 ] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:56.970 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:56.971 "name": "Existed_Raid", 
00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.971 "strip_size_kb": 64, 00:31:56.971 "state": "configuring", 00:31:56.971 "raid_level": "raid0", 00:31:56.971 "superblock": false, 00:31:56.971 "num_base_bdevs": 4, 00:31:56.971 "num_base_bdevs_discovered": 1, 00:31:56.971 "num_base_bdevs_operational": 4, 00:31:56.971 "base_bdevs_list": [ 00:31:56.971 { 00:31:56.971 "name": "BaseBdev1", 00:31:56.971 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:56.971 "is_configured": true, 00:31:56.971 "data_offset": 0, 00:31:56.971 "data_size": 65536 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": "BaseBdev2", 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 0, 00:31:56.971 "data_size": 0 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": "BaseBdev3", 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 0, 00:31:56.971 "data_size": 0 00:31:56.971 }, 00:31:56.971 { 00:31:56.971 "name": "BaseBdev4", 00:31:56.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.971 "is_configured": false, 00:31:56.971 "data_offset": 0, 00:31:56.971 "data_size": 0 00:31:56.971 } 00:31:56.971 ] 00:31:56.971 }' 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:56.971 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.539 [2024-10-09 14:01:03.845713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:57.539 [2024-10-09 14:01:03.845766] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.539 [2024-10-09 14:01:03.853747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:57.539 [2024-10-09 14:01:03.855971] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:57.539 [2024-10-09 14:01:03.856141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:57.539 [2024-10-09 14:01:03.856162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:57.539 [2024-10-09 14:01:03.856175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:57.539 [2024-10-09 14:01:03.856184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:31:57.539 [2024-10-09 14:01:03.856195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.539 "name": "Existed_Raid", 00:31:57.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.539 "strip_size_kb": 64, 00:31:57.539 "state": "configuring", 00:31:57.539 "raid_level": "raid0", 00:31:57.539 "superblock": false, 00:31:57.539 "num_base_bdevs": 4, 00:31:57.539 
"num_base_bdevs_discovered": 1, 00:31:57.539 "num_base_bdevs_operational": 4, 00:31:57.539 "base_bdevs_list": [ 00:31:57.539 { 00:31:57.539 "name": "BaseBdev1", 00:31:57.539 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:57.539 "is_configured": true, 00:31:57.539 "data_offset": 0, 00:31:57.539 "data_size": 65536 00:31:57.539 }, 00:31:57.539 { 00:31:57.539 "name": "BaseBdev2", 00:31:57.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.539 "is_configured": false, 00:31:57.539 "data_offset": 0, 00:31:57.539 "data_size": 0 00:31:57.539 }, 00:31:57.539 { 00:31:57.539 "name": "BaseBdev3", 00:31:57.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.539 "is_configured": false, 00:31:57.539 "data_offset": 0, 00:31:57.539 "data_size": 0 00:31:57.539 }, 00:31:57.539 { 00:31:57.539 "name": "BaseBdev4", 00:31:57.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.539 "is_configured": false, 00:31:57.539 "data_offset": 0, 00:31:57.539 "data_size": 0 00:31:57.539 } 00:31:57.539 ] 00:31:57.539 }' 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.539 14:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.801 [2024-10-09 14:01:04.329471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:57.801 BaseBdev2 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:31:57.801 14:01:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.801 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.065 [ 00:31:58.065 { 00:31:58.065 "name": "BaseBdev2", 00:31:58.065 "aliases": [ 00:31:58.065 "65570ab1-690c-43c3-84d2-eee4e3ea4977" 00:31:58.065 ], 00:31:58.065 "product_name": "Malloc disk", 00:31:58.065 "block_size": 512, 00:31:58.065 "num_blocks": 65536, 00:31:58.065 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:58.065 "assigned_rate_limits": { 00:31:58.065 "rw_ios_per_sec": 0, 00:31:58.065 "rw_mbytes_per_sec": 0, 00:31:58.065 "r_mbytes_per_sec": 0, 00:31:58.065 "w_mbytes_per_sec": 0 00:31:58.065 }, 00:31:58.065 "claimed": true, 00:31:58.065 "claim_type": "exclusive_write", 00:31:58.065 "zoned": false, 00:31:58.065 "supported_io_types": { 
00:31:58.065 "read": true, 00:31:58.065 "write": true, 00:31:58.065 "unmap": true, 00:31:58.065 "flush": true, 00:31:58.065 "reset": true, 00:31:58.065 "nvme_admin": false, 00:31:58.065 "nvme_io": false, 00:31:58.065 "nvme_io_md": false, 00:31:58.065 "write_zeroes": true, 00:31:58.065 "zcopy": true, 00:31:58.065 "get_zone_info": false, 00:31:58.065 "zone_management": false, 00:31:58.065 "zone_append": false, 00:31:58.065 "compare": false, 00:31:58.065 "compare_and_write": false, 00:31:58.065 "abort": true, 00:31:58.065 "seek_hole": false, 00:31:58.065 "seek_data": false, 00:31:58.065 "copy": true, 00:31:58.065 "nvme_iov_md": false 00:31:58.065 }, 00:31:58.065 "memory_domains": [ 00:31:58.065 { 00:31:58.065 "dma_device_id": "system", 00:31:58.065 "dma_device_type": 1 00:31:58.065 }, 00:31:58.065 { 00:31:58.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.065 "dma_device_type": 2 00:31:58.065 } 00:31:58.065 ], 00:31:58.065 "driver_specific": {} 00:31:58.065 } 00:31:58.065 ] 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:58.065 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.066 "name": "Existed_Raid", 00:31:58.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.066 "strip_size_kb": 64, 00:31:58.066 "state": "configuring", 00:31:58.066 "raid_level": "raid0", 00:31:58.066 "superblock": false, 00:31:58.066 "num_base_bdevs": 4, 00:31:58.066 "num_base_bdevs_discovered": 2, 00:31:58.066 "num_base_bdevs_operational": 4, 00:31:58.066 "base_bdevs_list": [ 00:31:58.066 { 00:31:58.066 "name": "BaseBdev1", 00:31:58.066 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:58.066 "is_configured": true, 00:31:58.066 "data_offset": 0, 00:31:58.066 "data_size": 65536 00:31:58.066 }, 00:31:58.066 { 00:31:58.066 "name": "BaseBdev2", 00:31:58.066 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:58.066 
"is_configured": true, 00:31:58.066 "data_offset": 0, 00:31:58.066 "data_size": 65536 00:31:58.066 }, 00:31:58.066 { 00:31:58.066 "name": "BaseBdev3", 00:31:58.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.066 "is_configured": false, 00:31:58.066 "data_offset": 0, 00:31:58.066 "data_size": 0 00:31:58.066 }, 00:31:58.066 { 00:31:58.066 "name": "BaseBdev4", 00:31:58.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.066 "is_configured": false, 00:31:58.066 "data_offset": 0, 00:31:58.066 "data_size": 0 00:31:58.066 } 00:31:58.066 ] 00:31:58.066 }' 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.066 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.325 [2024-10-09 14:01:04.821002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:58.325 BaseBdev3 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.325 [ 00:31:58.325 { 00:31:58.325 "name": "BaseBdev3", 00:31:58.325 "aliases": [ 00:31:58.325 "29231081-a676-47ec-8ec4-4ad3b685442d" 00:31:58.325 ], 00:31:58.325 "product_name": "Malloc disk", 00:31:58.325 "block_size": 512, 00:31:58.325 "num_blocks": 65536, 00:31:58.325 "uuid": "29231081-a676-47ec-8ec4-4ad3b685442d", 00:31:58.325 "assigned_rate_limits": { 00:31:58.325 "rw_ios_per_sec": 0, 00:31:58.325 "rw_mbytes_per_sec": 0, 00:31:58.325 "r_mbytes_per_sec": 0, 00:31:58.325 "w_mbytes_per_sec": 0 00:31:58.325 }, 00:31:58.325 "claimed": true, 00:31:58.325 "claim_type": "exclusive_write", 00:31:58.325 "zoned": false, 00:31:58.325 "supported_io_types": { 00:31:58.325 "read": true, 00:31:58.325 "write": true, 00:31:58.325 "unmap": true, 00:31:58.325 "flush": true, 00:31:58.325 "reset": true, 00:31:58.325 "nvme_admin": false, 00:31:58.325 "nvme_io": false, 00:31:58.325 "nvme_io_md": false, 00:31:58.325 "write_zeroes": true, 00:31:58.325 "zcopy": true, 00:31:58.325 "get_zone_info": false, 00:31:58.325 "zone_management": false, 00:31:58.325 "zone_append": false, 00:31:58.325 "compare": false, 00:31:58.325 "compare_and_write": false, 
00:31:58.325 "abort": true, 00:31:58.325 "seek_hole": false, 00:31:58.325 "seek_data": false, 00:31:58.325 "copy": true, 00:31:58.325 "nvme_iov_md": false 00:31:58.325 }, 00:31:58.325 "memory_domains": [ 00:31:58.325 { 00:31:58.325 "dma_device_id": "system", 00:31:58.325 "dma_device_type": 1 00:31:58.325 }, 00:31:58.325 { 00:31:58.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.325 "dma_device_type": 2 00:31:58.325 } 00:31:58.325 ], 00:31:58.325 "driver_specific": {} 00:31:58.325 } 00:31:58.325 ] 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.325 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.584 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.584 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:58.584 "name": "Existed_Raid", 00:31:58.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.584 "strip_size_kb": 64, 00:31:58.584 "state": "configuring", 00:31:58.584 "raid_level": "raid0", 00:31:58.584 "superblock": false, 00:31:58.584 "num_base_bdevs": 4, 00:31:58.584 "num_base_bdevs_discovered": 3, 00:31:58.584 "num_base_bdevs_operational": 4, 00:31:58.584 "base_bdevs_list": [ 00:31:58.584 { 00:31:58.584 "name": "BaseBdev1", 00:31:58.584 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:58.584 "is_configured": true, 00:31:58.584 "data_offset": 0, 00:31:58.584 "data_size": 65536 00:31:58.584 }, 00:31:58.584 { 00:31:58.584 "name": "BaseBdev2", 00:31:58.584 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:58.584 "is_configured": true, 00:31:58.584 "data_offset": 0, 00:31:58.584 "data_size": 65536 00:31:58.584 }, 00:31:58.584 { 00:31:58.584 "name": "BaseBdev3", 00:31:58.584 "uuid": "29231081-a676-47ec-8ec4-4ad3b685442d", 00:31:58.584 "is_configured": true, 00:31:58.584 "data_offset": 0, 00:31:58.584 "data_size": 65536 00:31:58.584 }, 00:31:58.584 { 00:31:58.584 "name": "BaseBdev4", 00:31:58.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.584 "is_configured": false, 
00:31:58.584 "data_offset": 0, 00:31:58.584 "data_size": 0 00:31:58.584 } 00:31:58.584 ] 00:31:58.584 }' 00:31:58.584 14:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:58.584 14:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.843 [2024-10-09 14:01:05.332310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:58.843 [2024-10-09 14:01:05.332355] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:31:58.843 [2024-10-09 14:01:05.332366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:31:58.843 [2024-10-09 14:01:05.332695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:58.843 [2024-10-09 14:01:05.332844] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:31:58.843 [2024-10-09 14:01:05.332865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:31:58.843 [2024-10-09 14:01:05.333068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.843 BaseBdev4 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.843 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.843 [ 00:31:58.843 { 00:31:58.843 "name": "BaseBdev4", 00:31:58.844 "aliases": [ 00:31:58.844 "9ba15b7c-8349-44e3-93d9-d3af92212dd6" 00:31:58.844 ], 00:31:58.844 "product_name": "Malloc disk", 00:31:58.844 "block_size": 512, 00:31:58.844 "num_blocks": 65536, 00:31:58.844 "uuid": "9ba15b7c-8349-44e3-93d9-d3af92212dd6", 00:31:58.844 "assigned_rate_limits": { 00:31:58.844 "rw_ios_per_sec": 0, 00:31:58.844 "rw_mbytes_per_sec": 0, 00:31:58.844 "r_mbytes_per_sec": 0, 00:31:58.844 "w_mbytes_per_sec": 0 00:31:58.844 }, 00:31:58.844 "claimed": true, 00:31:58.844 "claim_type": "exclusive_write", 00:31:58.844 "zoned": false, 00:31:58.844 "supported_io_types": { 00:31:58.844 "read": true, 00:31:58.844 "write": true, 00:31:58.844 "unmap": true, 00:31:58.844 "flush": true, 00:31:58.844 "reset": true, 00:31:58.844 
"nvme_admin": false, 00:31:58.844 "nvme_io": false, 00:31:58.844 "nvme_io_md": false, 00:31:58.844 "write_zeroes": true, 00:31:58.844 "zcopy": true, 00:31:58.844 "get_zone_info": false, 00:31:58.844 "zone_management": false, 00:31:58.844 "zone_append": false, 00:31:58.844 "compare": false, 00:31:58.844 "compare_and_write": false, 00:31:58.844 "abort": true, 00:31:58.844 "seek_hole": false, 00:31:58.844 "seek_data": false, 00:31:58.844 "copy": true, 00:31:58.844 "nvme_iov_md": false 00:31:58.844 }, 00:31:58.844 "memory_domains": [ 00:31:58.844 { 00:31:58.844 "dma_device_id": "system", 00:31:58.844 "dma_device_type": 1 00:31:58.844 }, 00:31:58.844 { 00:31:58.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.844 "dma_device_type": 2 00:31:58.844 } 00:31:58.844 ], 00:31:58.844 "driver_specific": {} 00:31:58.844 } 00:31:58.844 ] 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:58.844 14:01:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:58.844 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.103 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.103 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.103 "name": "Existed_Raid", 00:31:59.103 "uuid": "22e2d6ec-4302-4f82-9d3a-c729ca9a33f7", 00:31:59.103 "strip_size_kb": 64, 00:31:59.103 "state": "online", 00:31:59.103 "raid_level": "raid0", 00:31:59.103 "superblock": false, 00:31:59.103 "num_base_bdevs": 4, 00:31:59.103 "num_base_bdevs_discovered": 4, 00:31:59.103 "num_base_bdevs_operational": 4, 00:31:59.103 "base_bdevs_list": [ 00:31:59.103 { 00:31:59.103 "name": "BaseBdev1", 00:31:59.103 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:59.103 "is_configured": true, 00:31:59.103 "data_offset": 0, 00:31:59.103 "data_size": 65536 00:31:59.103 }, 00:31:59.103 { 00:31:59.103 "name": "BaseBdev2", 00:31:59.103 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:59.103 "is_configured": true, 00:31:59.103 "data_offset": 0, 00:31:59.103 "data_size": 65536 00:31:59.103 }, 00:31:59.103 { 00:31:59.103 "name": "BaseBdev3", 00:31:59.103 "uuid": 
"29231081-a676-47ec-8ec4-4ad3b685442d", 00:31:59.103 "is_configured": true, 00:31:59.103 "data_offset": 0, 00:31:59.103 "data_size": 65536 00:31:59.103 }, 00:31:59.103 { 00:31:59.103 "name": "BaseBdev4", 00:31:59.103 "uuid": "9ba15b7c-8349-44e3-93d9-d3af92212dd6", 00:31:59.103 "is_configured": true, 00:31:59.103 "data_offset": 0, 00:31:59.103 "data_size": 65536 00:31:59.103 } 00:31:59.103 ] 00:31:59.103 }' 00:31:59.103 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.103 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.363 [2024-10-09 14:01:05.756799] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.363 14:01:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.363 "name": "Existed_Raid", 00:31:59.363 "aliases": [ 00:31:59.363 "22e2d6ec-4302-4f82-9d3a-c729ca9a33f7" 00:31:59.363 ], 00:31:59.363 "product_name": "Raid Volume", 00:31:59.363 "block_size": 512, 00:31:59.363 "num_blocks": 262144, 00:31:59.363 "uuid": "22e2d6ec-4302-4f82-9d3a-c729ca9a33f7", 00:31:59.363 "assigned_rate_limits": { 00:31:59.363 "rw_ios_per_sec": 0, 00:31:59.363 "rw_mbytes_per_sec": 0, 00:31:59.363 "r_mbytes_per_sec": 0, 00:31:59.363 "w_mbytes_per_sec": 0 00:31:59.363 }, 00:31:59.363 "claimed": false, 00:31:59.363 "zoned": false, 00:31:59.363 "supported_io_types": { 00:31:59.363 "read": true, 00:31:59.363 "write": true, 00:31:59.363 "unmap": true, 00:31:59.363 "flush": true, 00:31:59.363 "reset": true, 00:31:59.363 "nvme_admin": false, 00:31:59.363 "nvme_io": false, 00:31:59.363 "nvme_io_md": false, 00:31:59.363 "write_zeroes": true, 00:31:59.363 "zcopy": false, 00:31:59.363 "get_zone_info": false, 00:31:59.363 "zone_management": false, 00:31:59.363 "zone_append": false, 00:31:59.363 "compare": false, 00:31:59.363 "compare_and_write": false, 00:31:59.363 "abort": false, 00:31:59.363 "seek_hole": false, 00:31:59.363 "seek_data": false, 00:31:59.363 "copy": false, 00:31:59.363 "nvme_iov_md": false 00:31:59.363 }, 00:31:59.363 "memory_domains": [ 00:31:59.363 { 00:31:59.363 "dma_device_id": "system", 00:31:59.363 "dma_device_type": 1 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.363 "dma_device_type": 2 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "system", 00:31:59.363 "dma_device_type": 1 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.363 "dma_device_type": 2 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "system", 00:31:59.363 "dma_device_type": 1 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:31:59.363 "dma_device_type": 2 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "system", 00:31:59.363 "dma_device_type": 1 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.363 "dma_device_type": 2 00:31:59.363 } 00:31:59.363 ], 00:31:59.363 "driver_specific": { 00:31:59.363 "raid": { 00:31:59.363 "uuid": "22e2d6ec-4302-4f82-9d3a-c729ca9a33f7", 00:31:59.363 "strip_size_kb": 64, 00:31:59.363 "state": "online", 00:31:59.363 "raid_level": "raid0", 00:31:59.363 "superblock": false, 00:31:59.363 "num_base_bdevs": 4, 00:31:59.363 "num_base_bdevs_discovered": 4, 00:31:59.363 "num_base_bdevs_operational": 4, 00:31:59.363 "base_bdevs_list": [ 00:31:59.363 { 00:31:59.363 "name": "BaseBdev1", 00:31:59.363 "uuid": "15e2a86c-b523-4b59-8321-cb3472287ff8", 00:31:59.363 "is_configured": true, 00:31:59.363 "data_offset": 0, 00:31:59.363 "data_size": 65536 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "name": "BaseBdev2", 00:31:59.363 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:59.363 "is_configured": true, 00:31:59.363 "data_offset": 0, 00:31:59.363 "data_size": 65536 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "name": "BaseBdev3", 00:31:59.363 "uuid": "29231081-a676-47ec-8ec4-4ad3b685442d", 00:31:59.363 "is_configured": true, 00:31:59.363 "data_offset": 0, 00:31:59.363 "data_size": 65536 00:31:59.363 }, 00:31:59.363 { 00:31:59.363 "name": "BaseBdev4", 00:31:59.363 "uuid": "9ba15b7c-8349-44e3-93d9-d3af92212dd6", 00:31:59.363 "is_configured": true, 00:31:59.363 "data_offset": 0, 00:31:59.363 "data_size": 65536 00:31:59.363 } 00:31:59.363 ] 00:31:59.363 } 00:31:59.363 } 00:31:59.363 }' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:31:59.363 BaseBdev2 00:31:59.363 BaseBdev3 
00:31:59.363 BaseBdev4' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.363 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.622 14:01:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:59.622 14:01:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.622 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:59.622 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:59.622 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:31:59.623 14:01:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.623 [2024-10-09 14:01:06.068563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:59.623 [2024-10-09 14:01:06.068595] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:59.623 [2024-10-09 14:01:06.068654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.623 "name": "Existed_Raid", 00:31:59.623 "uuid": "22e2d6ec-4302-4f82-9d3a-c729ca9a33f7", 00:31:59.623 "strip_size_kb": 64, 00:31:59.623 "state": "offline", 00:31:59.623 "raid_level": "raid0", 00:31:59.623 "superblock": false, 00:31:59.623 "num_base_bdevs": 4, 00:31:59.623 "num_base_bdevs_discovered": 3, 00:31:59.623 "num_base_bdevs_operational": 3, 00:31:59.623 "base_bdevs_list": [ 00:31:59.623 { 00:31:59.623 "name": null, 00:31:59.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.623 "is_configured": false, 00:31:59.623 "data_offset": 0, 00:31:59.623 "data_size": 65536 00:31:59.623 }, 00:31:59.623 { 00:31:59.623 "name": "BaseBdev2", 00:31:59.623 "uuid": "65570ab1-690c-43c3-84d2-eee4e3ea4977", 00:31:59.623 "is_configured": 
true, 00:31:59.623 "data_offset": 0, 00:31:59.623 "data_size": 65536 00:31:59.623 }, 00:31:59.623 { 00:31:59.623 "name": "BaseBdev3", 00:31:59.623 "uuid": "29231081-a676-47ec-8ec4-4ad3b685442d", 00:31:59.623 "is_configured": true, 00:31:59.623 "data_offset": 0, 00:31:59.623 "data_size": 65536 00:31:59.623 }, 00:31:59.623 { 00:31:59.623 "name": "BaseBdev4", 00:31:59.623 "uuid": "9ba15b7c-8349-44e3-93d9-d3af92212dd6", 00:31:59.623 "is_configured": true, 00:31:59.623 "data_offset": 0, 00:31:59.623 "data_size": 65536 00:31:59.623 } 00:31:59.623 ] 00:31:59.623 }' 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.623 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 [2024-10-09 14:01:06.565237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.191 [2024-10-09 14:01:06.629387] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:00.191 14:01:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.191 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.192 [2024-10-09 14:01:06.697343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:00.192 [2024-10-09 14:01:06.697393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.192 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.452 BaseBdev2 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.452 [ 00:32:00.452 { 00:32:00.452 "name": "BaseBdev2", 00:32:00.452 "aliases": [ 00:32:00.452 "3a26df59-9201-48ff-a0db-7f7d005c4350" 00:32:00.452 ], 00:32:00.452 "product_name": "Malloc disk", 00:32:00.452 "block_size": 512, 00:32:00.452 "num_blocks": 65536, 00:32:00.452 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:00.452 "assigned_rate_limits": { 00:32:00.452 "rw_ios_per_sec": 0, 00:32:00.452 "rw_mbytes_per_sec": 0, 00:32:00.452 "r_mbytes_per_sec": 0, 00:32:00.452 "w_mbytes_per_sec": 0 00:32:00.452 }, 00:32:00.452 "claimed": false, 00:32:00.452 "zoned": false, 00:32:00.452 "supported_io_types": { 00:32:00.452 "read": true, 00:32:00.452 "write": true, 00:32:00.452 "unmap": true, 00:32:00.452 "flush": true, 00:32:00.452 "reset": true, 00:32:00.452 "nvme_admin": false, 00:32:00.452 "nvme_io": false, 00:32:00.452 "nvme_io_md": false, 00:32:00.452 "write_zeroes": true, 00:32:00.452 "zcopy": true, 00:32:00.452 "get_zone_info": false, 00:32:00.452 "zone_management": false, 00:32:00.452 "zone_append": false, 00:32:00.452 "compare": false, 00:32:00.452 "compare_and_write": false, 00:32:00.452 "abort": true, 00:32:00.452 "seek_hole": false, 00:32:00.452 "seek_data": false, 
00:32:00.452 "copy": true, 00:32:00.452 "nvme_iov_md": false 00:32:00.452 }, 00:32:00.452 "memory_domains": [ 00:32:00.452 { 00:32:00.452 "dma_device_id": "system", 00:32:00.452 "dma_device_type": 1 00:32:00.452 }, 00:32:00.452 { 00:32:00.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.452 "dma_device_type": 2 00:32:00.452 } 00:32:00.452 ], 00:32:00.452 "driver_specific": {} 00:32:00.452 } 00:32:00.452 ] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.452 BaseBdev3 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:00.452 
14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.452 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 [ 00:32:00.453 { 00:32:00.453 "name": "BaseBdev3", 00:32:00.453 "aliases": [ 00:32:00.453 "f5906261-98de-46cd-beb9-aa06b1b9e899" 00:32:00.453 ], 00:32:00.453 "product_name": "Malloc disk", 00:32:00.453 "block_size": 512, 00:32:00.453 "num_blocks": 65536, 00:32:00.453 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:00.453 "assigned_rate_limits": { 00:32:00.453 "rw_ios_per_sec": 0, 00:32:00.453 "rw_mbytes_per_sec": 0, 00:32:00.453 "r_mbytes_per_sec": 0, 00:32:00.453 "w_mbytes_per_sec": 0 00:32:00.453 }, 00:32:00.453 "claimed": false, 00:32:00.453 "zoned": false, 00:32:00.453 "supported_io_types": { 00:32:00.453 "read": true, 00:32:00.453 "write": true, 00:32:00.453 "unmap": true, 00:32:00.453 "flush": true, 00:32:00.453 "reset": true, 00:32:00.453 "nvme_admin": false, 00:32:00.453 "nvme_io": false, 00:32:00.453 "nvme_io_md": false, 00:32:00.453 "write_zeroes": true, 00:32:00.453 "zcopy": true, 00:32:00.453 "get_zone_info": false, 00:32:00.453 "zone_management": false, 00:32:00.453 "zone_append": false, 00:32:00.453 "compare": false, 00:32:00.453 "compare_and_write": false, 00:32:00.453 "abort": true, 00:32:00.453 "seek_hole": false, 00:32:00.453 "seek_data": false, 00:32:00.453 
"copy": true, 00:32:00.453 "nvme_iov_md": false 00:32:00.453 }, 00:32:00.453 "memory_domains": [ 00:32:00.453 { 00:32:00.453 "dma_device_id": "system", 00:32:00.453 "dma_device_type": 1 00:32:00.453 }, 00:32:00.453 { 00:32:00.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.453 "dma_device_type": 2 00:32:00.453 } 00:32:00.453 ], 00:32:00.453 "driver_specific": {} 00:32:00.453 } 00:32:00.453 ] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 BaseBdev4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:00.453 14:01:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 [ 00:32:00.453 { 00:32:00.453 "name": "BaseBdev4", 00:32:00.453 "aliases": [ 00:32:00.453 "cb025353-1d41-4542-841c-60918fb098fd" 00:32:00.453 ], 00:32:00.453 "product_name": "Malloc disk", 00:32:00.453 "block_size": 512, 00:32:00.453 "num_blocks": 65536, 00:32:00.453 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:00.453 "assigned_rate_limits": { 00:32:00.453 "rw_ios_per_sec": 0, 00:32:00.453 "rw_mbytes_per_sec": 0, 00:32:00.453 "r_mbytes_per_sec": 0, 00:32:00.453 "w_mbytes_per_sec": 0 00:32:00.453 }, 00:32:00.453 "claimed": false, 00:32:00.453 "zoned": false, 00:32:00.453 "supported_io_types": { 00:32:00.453 "read": true, 00:32:00.453 "write": true, 00:32:00.453 "unmap": true, 00:32:00.453 "flush": true, 00:32:00.453 "reset": true, 00:32:00.453 "nvme_admin": false, 00:32:00.453 "nvme_io": false, 00:32:00.453 "nvme_io_md": false, 00:32:00.453 "write_zeroes": true, 00:32:00.453 "zcopy": true, 00:32:00.453 "get_zone_info": false, 00:32:00.453 "zone_management": false, 00:32:00.453 "zone_append": false, 00:32:00.453 "compare": false, 00:32:00.453 "compare_and_write": false, 00:32:00.453 "abort": true, 00:32:00.453 "seek_hole": false, 00:32:00.453 "seek_data": false, 00:32:00.453 "copy": true, 
00:32:00.453 "nvme_iov_md": false 00:32:00.453 }, 00:32:00.453 "memory_domains": [ 00:32:00.453 { 00:32:00.453 "dma_device_id": "system", 00:32:00.453 "dma_device_type": 1 00:32:00.453 }, 00:32:00.453 { 00:32:00.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.453 "dma_device_type": 2 00:32:00.453 } 00:32:00.453 ], 00:32:00.453 "driver_specific": {} 00:32:00.453 } 00:32:00.453 ] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 [2024-10-09 14:01:06.883391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:00.453 [2024-10-09 14:01:06.883440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:00.453 [2024-10-09 14:01:06.883463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:00.453 [2024-10-09 14:01:06.885652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:00.453 [2024-10-09 14:01:06.885717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.453 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:00.453 "name": "Existed_Raid", 00:32:00.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.453 "strip_size_kb": 64, 00:32:00.453 "state": "configuring", 00:32:00.454 
"raid_level": "raid0", 00:32:00.454 "superblock": false, 00:32:00.454 "num_base_bdevs": 4, 00:32:00.454 "num_base_bdevs_discovered": 3, 00:32:00.454 "num_base_bdevs_operational": 4, 00:32:00.454 "base_bdevs_list": [ 00:32:00.454 { 00:32:00.454 "name": "BaseBdev1", 00:32:00.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:00.454 "is_configured": false, 00:32:00.454 "data_offset": 0, 00:32:00.454 "data_size": 0 00:32:00.454 }, 00:32:00.454 { 00:32:00.454 "name": "BaseBdev2", 00:32:00.454 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:00.454 "is_configured": true, 00:32:00.454 "data_offset": 0, 00:32:00.454 "data_size": 65536 00:32:00.454 }, 00:32:00.454 { 00:32:00.454 "name": "BaseBdev3", 00:32:00.454 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:00.454 "is_configured": true, 00:32:00.454 "data_offset": 0, 00:32:00.454 "data_size": 65536 00:32:00.454 }, 00:32:00.454 { 00:32:00.454 "name": "BaseBdev4", 00:32:00.454 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:00.454 "is_configured": true, 00:32:00.454 "data_offset": 0, 00:32:00.454 "data_size": 65536 00:32:00.454 } 00:32:00.454 ] 00:32:00.454 }' 00:32:00.454 14:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:00.454 14:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.022 [2024-10-09 14:01:07.351478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.022 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.023 "name": "Existed_Raid", 00:32:01.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.023 "strip_size_kb": 64, 00:32:01.023 "state": "configuring", 00:32:01.023 "raid_level": "raid0", 00:32:01.023 "superblock": false, 00:32:01.023 
"num_base_bdevs": 4, 00:32:01.023 "num_base_bdevs_discovered": 2, 00:32:01.023 "num_base_bdevs_operational": 4, 00:32:01.023 "base_bdevs_list": [ 00:32:01.023 { 00:32:01.023 "name": "BaseBdev1", 00:32:01.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.023 "is_configured": false, 00:32:01.023 "data_offset": 0, 00:32:01.023 "data_size": 0 00:32:01.023 }, 00:32:01.023 { 00:32:01.023 "name": null, 00:32:01.023 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:01.023 "is_configured": false, 00:32:01.023 "data_offset": 0, 00:32:01.023 "data_size": 65536 00:32:01.023 }, 00:32:01.023 { 00:32:01.023 "name": "BaseBdev3", 00:32:01.023 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:01.023 "is_configured": true, 00:32:01.023 "data_offset": 0, 00:32:01.023 "data_size": 65536 00:32:01.023 }, 00:32:01.023 { 00:32:01.023 "name": "BaseBdev4", 00:32:01.023 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:01.023 "is_configured": true, 00:32:01.023 "data_offset": 0, 00:32:01.023 "data_size": 65536 00:32:01.023 } 00:32:01.023 ] 00:32:01.023 }' 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.023 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.282 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.282 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.282 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.282 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:01.541 14:01:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.541 [2024-10-09 14:01:07.870708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:01.541 BaseBdev1 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.541 [ 00:32:01.541 { 00:32:01.541 "name": "BaseBdev1", 00:32:01.541 "aliases": [ 00:32:01.541 "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1" 00:32:01.541 ], 00:32:01.541 "product_name": "Malloc disk", 00:32:01.541 "block_size": 512, 00:32:01.541 "num_blocks": 65536, 00:32:01.541 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:01.541 "assigned_rate_limits": { 00:32:01.541 "rw_ios_per_sec": 0, 00:32:01.541 "rw_mbytes_per_sec": 0, 00:32:01.541 "r_mbytes_per_sec": 0, 00:32:01.541 "w_mbytes_per_sec": 0 00:32:01.541 }, 00:32:01.541 "claimed": true, 00:32:01.541 "claim_type": "exclusive_write", 00:32:01.541 "zoned": false, 00:32:01.541 "supported_io_types": { 00:32:01.541 "read": true, 00:32:01.541 "write": true, 00:32:01.541 "unmap": true, 00:32:01.541 "flush": true, 00:32:01.541 "reset": true, 00:32:01.541 "nvme_admin": false, 00:32:01.541 "nvme_io": false, 00:32:01.541 "nvme_io_md": false, 00:32:01.541 "write_zeroes": true, 00:32:01.541 "zcopy": true, 00:32:01.541 "get_zone_info": false, 00:32:01.541 "zone_management": false, 00:32:01.541 "zone_append": false, 00:32:01.541 "compare": false, 00:32:01.541 "compare_and_write": false, 00:32:01.541 "abort": true, 00:32:01.541 "seek_hole": false, 00:32:01.541 "seek_data": false, 00:32:01.541 "copy": true, 00:32:01.541 "nvme_iov_md": false 00:32:01.541 }, 00:32:01.541 "memory_domains": [ 00:32:01.541 { 00:32:01.541 "dma_device_id": "system", 00:32:01.541 "dma_device_type": 1 00:32:01.541 }, 00:32:01.541 { 00:32:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.541 "dma_device_type": 2 00:32:01.541 } 00:32:01.541 ], 00:32:01.541 "driver_specific": {} 00:32:01.541 } 00:32:01.541 ] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:01.541 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:01.542 "name": "Existed_Raid", 00:32:01.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.542 "strip_size_kb": 64, 00:32:01.542 "state": "configuring", 00:32:01.542 "raid_level": "raid0", 00:32:01.542 "superblock": false, 
00:32:01.542 "num_base_bdevs": 4, 00:32:01.542 "num_base_bdevs_discovered": 3, 00:32:01.542 "num_base_bdevs_operational": 4, 00:32:01.542 "base_bdevs_list": [ 00:32:01.542 { 00:32:01.542 "name": "BaseBdev1", 00:32:01.542 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:01.542 "is_configured": true, 00:32:01.542 "data_offset": 0, 00:32:01.542 "data_size": 65536 00:32:01.542 }, 00:32:01.542 { 00:32:01.542 "name": null, 00:32:01.542 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:01.542 "is_configured": false, 00:32:01.542 "data_offset": 0, 00:32:01.542 "data_size": 65536 00:32:01.542 }, 00:32:01.542 { 00:32:01.542 "name": "BaseBdev3", 00:32:01.542 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:01.542 "is_configured": true, 00:32:01.542 "data_offset": 0, 00:32:01.542 "data_size": 65536 00:32:01.542 }, 00:32:01.542 { 00:32:01.542 "name": "BaseBdev4", 00:32:01.542 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:01.542 "is_configured": true, 00:32:01.542 "data_offset": 0, 00:32:01.542 "data_size": 65536 00:32:01.542 } 00:32:01.542 ] 00:32:01.542 }' 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:01.542 14:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.800 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.800 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:01.800 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:01.800 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:01.800 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:02.059 14:01:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.059 [2024-10-09 14:01:08.378851] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.059 14:01:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.059 "name": "Existed_Raid", 00:32:02.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.059 "strip_size_kb": 64, 00:32:02.059 "state": "configuring", 00:32:02.059 "raid_level": "raid0", 00:32:02.059 "superblock": false, 00:32:02.059 "num_base_bdevs": 4, 00:32:02.059 "num_base_bdevs_discovered": 2, 00:32:02.059 "num_base_bdevs_operational": 4, 00:32:02.059 "base_bdevs_list": [ 00:32:02.059 { 00:32:02.059 "name": "BaseBdev1", 00:32:02.059 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:02.059 "is_configured": true, 00:32:02.059 "data_offset": 0, 00:32:02.059 "data_size": 65536 00:32:02.059 }, 00:32:02.059 { 00:32:02.059 "name": null, 00:32:02.059 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:02.059 "is_configured": false, 00:32:02.059 "data_offset": 0, 00:32:02.059 "data_size": 65536 00:32:02.059 }, 00:32:02.059 { 00:32:02.059 "name": null, 00:32:02.059 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:02.059 "is_configured": false, 00:32:02.059 "data_offset": 0, 00:32:02.059 "data_size": 65536 00:32:02.059 }, 00:32:02.059 { 00:32:02.059 "name": "BaseBdev4", 00:32:02.059 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:02.059 "is_configured": true, 00:32:02.059 "data_offset": 0, 00:32:02.059 "data_size": 65536 00:32:02.059 } 00:32:02.059 ] 00:32:02.059 }' 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.059 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.319 [2024-10-09 14:01:08.843013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.319 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.579 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.579 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:02.579 "name": "Existed_Raid", 00:32:02.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.579 "strip_size_kb": 64, 00:32:02.579 "state": "configuring", 00:32:02.579 "raid_level": "raid0", 00:32:02.579 "superblock": false, 00:32:02.579 "num_base_bdevs": 4, 00:32:02.579 "num_base_bdevs_discovered": 3, 00:32:02.579 "num_base_bdevs_operational": 4, 00:32:02.579 "base_bdevs_list": [ 00:32:02.579 { 00:32:02.579 "name": "BaseBdev1", 00:32:02.579 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:02.579 "is_configured": true, 00:32:02.579 "data_offset": 0, 00:32:02.579 "data_size": 65536 00:32:02.579 }, 00:32:02.579 { 00:32:02.579 "name": null, 00:32:02.579 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:02.579 "is_configured": false, 00:32:02.579 "data_offset": 0, 00:32:02.579 "data_size": 65536 00:32:02.579 }, 00:32:02.579 { 00:32:02.579 "name": "BaseBdev3", 00:32:02.579 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 
00:32:02.579 "is_configured": true, 00:32:02.579 "data_offset": 0, 00:32:02.579 "data_size": 65536 00:32:02.579 }, 00:32:02.579 { 00:32:02.579 "name": "BaseBdev4", 00:32:02.579 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:02.579 "is_configured": true, 00:32:02.579 "data_offset": 0, 00:32:02.579 "data_size": 65536 00:32:02.579 } 00:32:02.579 ] 00:32:02.579 }' 00:32:02.579 14:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:02.579 14:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.838 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.838 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.838 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.838 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.839 [2024-10-09 14:01:09.343130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:02.839 14:01:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:02.839 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.097 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.097 "name": "Existed_Raid", 00:32:03.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.097 "strip_size_kb": 64, 00:32:03.097 "state": "configuring", 00:32:03.098 "raid_level": "raid0", 00:32:03.098 "superblock": false, 00:32:03.098 "num_base_bdevs": 4, 00:32:03.098 "num_base_bdevs_discovered": 2, 00:32:03.098 
"num_base_bdevs_operational": 4, 00:32:03.098 "base_bdevs_list": [ 00:32:03.098 { 00:32:03.098 "name": null, 00:32:03.098 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:03.098 "is_configured": false, 00:32:03.098 "data_offset": 0, 00:32:03.098 "data_size": 65536 00:32:03.098 }, 00:32:03.098 { 00:32:03.098 "name": null, 00:32:03.098 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:03.098 "is_configured": false, 00:32:03.098 "data_offset": 0, 00:32:03.098 "data_size": 65536 00:32:03.098 }, 00:32:03.098 { 00:32:03.098 "name": "BaseBdev3", 00:32:03.098 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:03.098 "is_configured": true, 00:32:03.098 "data_offset": 0, 00:32:03.098 "data_size": 65536 00:32:03.098 }, 00:32:03.098 { 00:32:03.098 "name": "BaseBdev4", 00:32:03.098 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:03.098 "is_configured": true, 00:32:03.098 "data_offset": 0, 00:32:03.098 "data_size": 65536 00:32:03.098 } 00:32:03.098 ] 00:32:03.098 }' 00:32:03.098 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.098 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.357 [2024-10-09 14:01:09.837778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.357 14:01:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.357 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.357 "name": "Existed_Raid", 00:32:03.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.357 "strip_size_kb": 64, 00:32:03.357 "state": "configuring", 00:32:03.357 "raid_level": "raid0", 00:32:03.357 "superblock": false, 00:32:03.357 "num_base_bdevs": 4, 00:32:03.357 "num_base_bdevs_discovered": 3, 00:32:03.357 "num_base_bdevs_operational": 4, 00:32:03.357 "base_bdevs_list": [ 00:32:03.357 { 00:32:03.357 "name": null, 00:32:03.357 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:03.357 "is_configured": false, 00:32:03.357 "data_offset": 0, 00:32:03.357 "data_size": 65536 00:32:03.357 }, 00:32:03.357 { 00:32:03.357 "name": "BaseBdev2", 00:32:03.357 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:03.357 "is_configured": true, 00:32:03.357 "data_offset": 0, 00:32:03.357 "data_size": 65536 00:32:03.357 }, 00:32:03.357 { 00:32:03.357 "name": "BaseBdev3", 00:32:03.357 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:03.357 "is_configured": true, 00:32:03.357 "data_offset": 0, 00:32:03.357 "data_size": 65536 00:32:03.357 }, 00:32:03.357 { 00:32:03.357 "name": "BaseBdev4", 00:32:03.357 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:03.358 "is_configured": true, 00:32:03.358 "data_offset": 0, 00:32:03.358 "data_size": 65536 00:32:03.358 } 00:32:03.358 ] 00:32:03.358 }' 00:32:03.358 14:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.358 14:01:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:03.926 
14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 19f85e9f-81bd-46aa-8880-9b8d91cc7dd1 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.926 [2024-10-09 14:01:10.369004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:03.926 [2024-10-09 14:01:10.369051] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:32:03.926 [2024-10-09 14:01:10.369060] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:32:03.926 [2024-10-09 14:01:10.369340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:32:03.926 [2024-10-09 14:01:10.369447] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:32:03.926 [2024-10-09 14:01:10.369467] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:32:03.926 [2024-10-09 14:01:10.369642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:03.926 NewBaseBdev 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.926 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:32:03.926 [ 00:32:03.926 { 00:32:03.926 "name": "NewBaseBdev", 00:32:03.926 "aliases": [ 00:32:03.926 "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1" 00:32:03.926 ], 00:32:03.926 "product_name": "Malloc disk", 00:32:03.926 "block_size": 512, 00:32:03.926 "num_blocks": 65536, 00:32:03.926 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:03.926 "assigned_rate_limits": { 00:32:03.926 "rw_ios_per_sec": 0, 00:32:03.926 "rw_mbytes_per_sec": 0, 00:32:03.926 "r_mbytes_per_sec": 0, 00:32:03.926 "w_mbytes_per_sec": 0 00:32:03.926 }, 00:32:03.926 "claimed": true, 00:32:03.926 "claim_type": "exclusive_write", 00:32:03.926 "zoned": false, 00:32:03.926 "supported_io_types": { 00:32:03.926 "read": true, 00:32:03.926 "write": true, 00:32:03.926 "unmap": true, 00:32:03.926 "flush": true, 00:32:03.926 "reset": true, 00:32:03.926 "nvme_admin": false, 00:32:03.926 "nvme_io": false, 00:32:03.926 "nvme_io_md": false, 00:32:03.926 "write_zeroes": true, 00:32:03.926 "zcopy": true, 00:32:03.926 "get_zone_info": false, 00:32:03.926 "zone_management": false, 00:32:03.926 "zone_append": false, 00:32:03.926 "compare": false, 00:32:03.926 "compare_and_write": false, 00:32:03.926 "abort": true, 00:32:03.926 "seek_hole": false, 00:32:03.926 "seek_data": false, 00:32:03.926 "copy": true, 00:32:03.926 "nvme_iov_md": false 00:32:03.926 }, 00:32:03.926 "memory_domains": [ 00:32:03.926 { 00:32:03.926 "dma_device_id": "system", 00:32:03.926 "dma_device_type": 1 00:32:03.927 }, 00:32:03.927 { 00:32:03.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:03.927 "dma_device_type": 2 00:32:03.927 } 00:32:03.927 ], 00:32:03.927 "driver_specific": {} 00:32:03.927 } 00:32:03.927 ] 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:03.927 "name": "Existed_Raid", 00:32:03.927 "uuid": "092d37d8-7880-4a5b-8237-14459519fbe9", 00:32:03.927 "strip_size_kb": 64, 00:32:03.927 "state": "online", 00:32:03.927 "raid_level": "raid0", 00:32:03.927 "superblock": false, 00:32:03.927 "num_base_bdevs": 4, 00:32:03.927 
"num_base_bdevs_discovered": 4, 00:32:03.927 "num_base_bdevs_operational": 4, 00:32:03.927 "base_bdevs_list": [ 00:32:03.927 { 00:32:03.927 "name": "NewBaseBdev", 00:32:03.927 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:03.927 "is_configured": true, 00:32:03.927 "data_offset": 0, 00:32:03.927 "data_size": 65536 00:32:03.927 }, 00:32:03.927 { 00:32:03.927 "name": "BaseBdev2", 00:32:03.927 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:03.927 "is_configured": true, 00:32:03.927 "data_offset": 0, 00:32:03.927 "data_size": 65536 00:32:03.927 }, 00:32:03.927 { 00:32:03.927 "name": "BaseBdev3", 00:32:03.927 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:03.927 "is_configured": true, 00:32:03.927 "data_offset": 0, 00:32:03.927 "data_size": 65536 00:32:03.927 }, 00:32:03.927 { 00:32:03.927 "name": "BaseBdev4", 00:32:03.927 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:03.927 "is_configured": true, 00:32:03.927 "data_offset": 0, 00:32:03.927 "data_size": 65536 00:32:03.927 } 00:32:03.927 ] 00:32:03.927 }' 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:03.927 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.495 [2024-10-09 14:01:10.861491] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.495 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:04.495 "name": "Existed_Raid", 00:32:04.495 "aliases": [ 00:32:04.495 "092d37d8-7880-4a5b-8237-14459519fbe9" 00:32:04.495 ], 00:32:04.495 "product_name": "Raid Volume", 00:32:04.495 "block_size": 512, 00:32:04.495 "num_blocks": 262144, 00:32:04.495 "uuid": "092d37d8-7880-4a5b-8237-14459519fbe9", 00:32:04.495 "assigned_rate_limits": { 00:32:04.495 "rw_ios_per_sec": 0, 00:32:04.495 "rw_mbytes_per_sec": 0, 00:32:04.495 "r_mbytes_per_sec": 0, 00:32:04.495 "w_mbytes_per_sec": 0 00:32:04.495 }, 00:32:04.495 "claimed": false, 00:32:04.495 "zoned": false, 00:32:04.495 "supported_io_types": { 00:32:04.495 "read": true, 00:32:04.495 "write": true, 00:32:04.495 "unmap": true, 00:32:04.495 "flush": true, 00:32:04.495 "reset": true, 00:32:04.495 "nvme_admin": false, 00:32:04.495 "nvme_io": false, 00:32:04.495 "nvme_io_md": false, 00:32:04.495 "write_zeroes": true, 00:32:04.495 "zcopy": false, 00:32:04.495 "get_zone_info": false, 00:32:04.495 "zone_management": false, 00:32:04.495 "zone_append": false, 00:32:04.495 "compare": false, 00:32:04.495 "compare_and_write": false, 00:32:04.495 "abort": false, 00:32:04.495 "seek_hole": false, 00:32:04.495 "seek_data": false, 00:32:04.495 "copy": false, 00:32:04.495 "nvme_iov_md": false 00:32:04.495 }, 00:32:04.495 "memory_domains": [ 
00:32:04.495 { 00:32:04.495 "dma_device_id": "system", 00:32:04.495 "dma_device_type": 1 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:04.495 "dma_device_type": 2 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "system", 00:32:04.495 "dma_device_type": 1 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:04.495 "dma_device_type": 2 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "system", 00:32:04.495 "dma_device_type": 1 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:04.495 "dma_device_type": 2 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "system", 00:32:04.495 "dma_device_type": 1 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:04.495 "dma_device_type": 2 00:32:04.495 } 00:32:04.495 ], 00:32:04.495 "driver_specific": { 00:32:04.495 "raid": { 00:32:04.495 "uuid": "092d37d8-7880-4a5b-8237-14459519fbe9", 00:32:04.495 "strip_size_kb": 64, 00:32:04.495 "state": "online", 00:32:04.495 "raid_level": "raid0", 00:32:04.495 "superblock": false, 00:32:04.495 "num_base_bdevs": 4, 00:32:04.495 "num_base_bdevs_discovered": 4, 00:32:04.495 "num_base_bdevs_operational": 4, 00:32:04.495 "base_bdevs_list": [ 00:32:04.495 { 00:32:04.495 "name": "NewBaseBdev", 00:32:04.495 "uuid": "19f85e9f-81bd-46aa-8880-9b8d91cc7dd1", 00:32:04.495 "is_configured": true, 00:32:04.495 "data_offset": 0, 00:32:04.495 "data_size": 65536 00:32:04.495 }, 00:32:04.495 { 00:32:04.495 "name": "BaseBdev2", 00:32:04.495 "uuid": "3a26df59-9201-48ff-a0db-7f7d005c4350", 00:32:04.495 "is_configured": true, 00:32:04.495 "data_offset": 0, 00:32:04.495 "data_size": 65536 00:32:04.496 }, 00:32:04.496 { 00:32:04.496 "name": "BaseBdev3", 00:32:04.496 "uuid": "f5906261-98de-46cd-beb9-aa06b1b9e899", 00:32:04.496 "is_configured": true, 00:32:04.496 "data_offset": 0, 00:32:04.496 "data_size": 65536 
00:32:04.496 }, 00:32:04.496 { 00:32:04.496 "name": "BaseBdev4", 00:32:04.496 "uuid": "cb025353-1d41-4542-841c-60918fb098fd", 00:32:04.496 "is_configured": true, 00:32:04.496 "data_offset": 0, 00:32:04.496 "data_size": 65536 00:32:04.496 } 00:32:04.496 ] 00:32:04.496 } 00:32:04.496 } 00:32:04.496 }' 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:04.496 BaseBdev2 00:32:04.496 BaseBdev3 00:32:04.496 BaseBdev4' 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.496 14:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:04.496 
14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.496 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:04.755 [2024-10-09 14:01:11.177226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:04.755 [2024-10-09 14:01:11.177263] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:04.755 [2024-10-09 14:01:11.177345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:04.755 [2024-10-09 14:01:11.177414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:04.755 [2024-10-09 14:01:11.177425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80704 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80704 ']' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80704 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80704 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:04.755 killing process with pid 80704 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80704' 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80704 00:32:04.755 [2024-10-09 14:01:11.223444] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:04.755 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80704 00:32:04.755 [2024-10-09 14:01:11.265016] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:05.014 14:01:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:05.014 00:32:05.014 real 0m9.602s 00:32:05.014 user 0m16.664s 00:32:05.014 sys 0m1.976s 00:32:05.014 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.014 14:01:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:05.014 ************************************ 00:32:05.014 END TEST raid_state_function_test 00:32:05.014 ************************************ 00:32:05.274 14:01:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:32:05.274 14:01:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:05.274 14:01:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:05.274 14:01:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:05.274 ************************************ 00:32:05.274 START TEST raid_state_function_test_sb 00:32:05.274 ************************************ 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:05.274 
14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81353 00:32:05.274 Process raid pid: 81353 00:32:05.274 14:01:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81353' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81353 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81353 ']' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:05.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:05.274 14:01:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:05.274 [2024-10-09 14:01:11.700438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:05.275 [2024-10-09 14:01:11.700650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.534 [2024-10-09 14:01:11.878095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.534 [2024-10-09 14:01:11.925554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.534 [2024-10-09 14:01:11.970086] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:05.534 [2024-10-09 14:01:11.970128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.102 [2024-10-09 14:01:12.561488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:06.102 [2024-10-09 14:01:12.561543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:06.102 [2024-10-09 14:01:12.561571] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:06.102 [2024-10-09 14:01:12.561602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:06.102 [2024-10-09 14:01:12.561611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:32:06.102 [2024-10-09 14:01:12.561628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:06.102 [2024-10-09 14:01:12.561636] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:06.102 [2024-10-09 14:01:12.561649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:06.102 14:01:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.102 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:06.102 "name": "Existed_Raid", 00:32:06.102 "uuid": "f606f1bd-0874-473e-837c-ae809c24b33f", 00:32:06.102 "strip_size_kb": 64, 00:32:06.102 "state": "configuring", 00:32:06.102 "raid_level": "raid0", 00:32:06.102 "superblock": true, 00:32:06.102 "num_base_bdevs": 4, 00:32:06.102 "num_base_bdevs_discovered": 0, 00:32:06.102 "num_base_bdevs_operational": 4, 00:32:06.102 "base_bdevs_list": [ 00:32:06.102 { 00:32:06.102 "name": "BaseBdev1", 00:32:06.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.102 "is_configured": false, 00:32:06.102 "data_offset": 0, 00:32:06.102 "data_size": 0 00:32:06.102 }, 00:32:06.102 { 00:32:06.102 "name": "BaseBdev2", 00:32:06.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.102 "is_configured": false, 00:32:06.102 "data_offset": 0, 00:32:06.102 "data_size": 0 00:32:06.102 }, 00:32:06.102 { 00:32:06.102 "name": "BaseBdev3", 00:32:06.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.102 "is_configured": false, 00:32:06.102 "data_offset": 0, 00:32:06.102 "data_size": 0 00:32:06.102 }, 00:32:06.102 { 00:32:06.102 "name": "BaseBdev4", 00:32:06.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.102 "is_configured": false, 00:32:06.102 "data_offset": 0, 00:32:06.103 "data_size": 0 00:32:06.103 } 00:32:06.103 ] 00:32:06.103 }' 00:32:06.103 14:01:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:06.103 14:01:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 [2024-10-09 14:01:13.021517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:06.673 [2024-10-09 14:01:13.021578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 [2024-10-09 14:01:13.033594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:06.673 [2024-10-09 14:01:13.033642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:06.673 [2024-10-09 14:01:13.033652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:06.673 [2024-10-09 14:01:13.033681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:06.673 [2024-10-09 14:01:13.033706] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:06.673 [2024-10-09 14:01:13.033721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:06.673 [2024-10-09 14:01:13.033730] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:32:06.673 [2024-10-09 14:01:13.033744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 [2024-10-09 14:01:13.051423] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:06.673 BaseBdev1 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.673 [ 00:32:06.673 { 00:32:06.673 "name": "BaseBdev1", 00:32:06.673 "aliases": [ 00:32:06.673 "fc125191-147f-452c-b615-fcfa10acf580" 00:32:06.673 ], 00:32:06.673 "product_name": "Malloc disk", 00:32:06.673 "block_size": 512, 00:32:06.673 "num_blocks": 65536, 00:32:06.673 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:06.673 "assigned_rate_limits": { 00:32:06.673 "rw_ios_per_sec": 0, 00:32:06.673 "rw_mbytes_per_sec": 0, 00:32:06.673 "r_mbytes_per_sec": 0, 00:32:06.673 "w_mbytes_per_sec": 0 00:32:06.673 }, 00:32:06.673 "claimed": true, 00:32:06.673 "claim_type": "exclusive_write", 00:32:06.673 "zoned": false, 00:32:06.673 "supported_io_types": { 00:32:06.673 "read": true, 00:32:06.673 "write": true, 00:32:06.673 "unmap": true, 00:32:06.673 "flush": true, 00:32:06.673 "reset": true, 00:32:06.673 "nvme_admin": false, 00:32:06.673 "nvme_io": false, 00:32:06.673 "nvme_io_md": false, 00:32:06.673 "write_zeroes": true, 00:32:06.673 "zcopy": true, 00:32:06.673 "get_zone_info": false, 00:32:06.673 "zone_management": false, 00:32:06.673 "zone_append": false, 00:32:06.673 "compare": false, 00:32:06.673 "compare_and_write": false, 00:32:06.673 "abort": true, 00:32:06.673 "seek_hole": false, 00:32:06.673 "seek_data": false, 00:32:06.673 "copy": true, 00:32:06.673 "nvme_iov_md": false 00:32:06.673 }, 00:32:06.673 "memory_domains": [ 00:32:06.673 { 00:32:06.673 "dma_device_id": "system", 00:32:06.673 "dma_device_type": 1 00:32:06.673 }, 00:32:06.673 { 00:32:06.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.673 "dma_device_type": 2 00:32:06.673 } 00:32:06.673 ], 00:32:06.673 "driver_specific": {} 
00:32:06.673 } 00:32:06.673 ] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:06.673 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:06.674 "name": "Existed_Raid", 00:32:06.674 "uuid": "bc77f96b-81a1-4ff1-b00c-ec4970a1e7a8", 00:32:06.674 "strip_size_kb": 64, 00:32:06.674 "state": "configuring", 00:32:06.674 "raid_level": "raid0", 00:32:06.674 "superblock": true, 00:32:06.674 "num_base_bdevs": 4, 00:32:06.674 "num_base_bdevs_discovered": 1, 00:32:06.674 "num_base_bdevs_operational": 4, 00:32:06.674 "base_bdevs_list": [ 00:32:06.674 { 00:32:06.674 "name": "BaseBdev1", 00:32:06.674 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:06.674 "is_configured": true, 00:32:06.674 "data_offset": 2048, 00:32:06.674 "data_size": 63488 00:32:06.674 }, 00:32:06.674 { 00:32:06.674 "name": "BaseBdev2", 00:32:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.674 "is_configured": false, 00:32:06.674 "data_offset": 0, 00:32:06.674 "data_size": 0 00:32:06.674 }, 00:32:06.674 { 00:32:06.674 "name": "BaseBdev3", 00:32:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.674 "is_configured": false, 00:32:06.674 "data_offset": 0, 00:32:06.674 "data_size": 0 00:32:06.674 }, 00:32:06.674 { 00:32:06.674 "name": "BaseBdev4", 00:32:06.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:06.674 "is_configured": false, 00:32:06.674 "data_offset": 0, 00:32:06.674 "data_size": 0 00:32:06.674 } 00:32:06.674 ] 00:32:06.674 }' 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:06.674 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.250 [2024-10-09 14:01:13.531582] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:07.250 [2024-10-09 14:01:13.531641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.250 [2024-10-09 14:01:13.543632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:07.250 [2024-10-09 14:01:13.545884] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:07.250 [2024-10-09 14:01:13.545928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:07.250 [2024-10-09 14:01:13.545939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:07.250 [2024-10-09 14:01:13.545952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:07.250 [2024-10-09 14:01:13.545960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:07.250 [2024-10-09 14:01:13.545971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:07.250 14:01:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.250 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:07.250 "name": 
"Existed_Raid", 00:32:07.250 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:07.250 "strip_size_kb": 64, 00:32:07.250 "state": "configuring", 00:32:07.250 "raid_level": "raid0", 00:32:07.250 "superblock": true, 00:32:07.250 "num_base_bdevs": 4, 00:32:07.250 "num_base_bdevs_discovered": 1, 00:32:07.250 "num_base_bdevs_operational": 4, 00:32:07.250 "base_bdevs_list": [ 00:32:07.250 { 00:32:07.250 "name": "BaseBdev1", 00:32:07.250 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:07.250 "is_configured": true, 00:32:07.250 "data_offset": 2048, 00:32:07.250 "data_size": 63488 00:32:07.250 }, 00:32:07.250 { 00:32:07.250 "name": "BaseBdev2", 00:32:07.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.250 "is_configured": false, 00:32:07.250 "data_offset": 0, 00:32:07.250 "data_size": 0 00:32:07.250 }, 00:32:07.250 { 00:32:07.251 "name": "BaseBdev3", 00:32:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.251 "is_configured": false, 00:32:07.251 "data_offset": 0, 00:32:07.251 "data_size": 0 00:32:07.251 }, 00:32:07.251 { 00:32:07.251 "name": "BaseBdev4", 00:32:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.251 "is_configured": false, 00:32:07.251 "data_offset": 0, 00:32:07.251 "data_size": 0 00:32:07.251 } 00:32:07.251 ] 00:32:07.251 }' 00:32:07.251 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:07.251 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.510 [2024-10-09 14:01:13.985953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:32:07.510 BaseBdev2 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.510 14:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.510 [ 00:32:07.510 { 00:32:07.510 "name": "BaseBdev2", 00:32:07.510 "aliases": [ 00:32:07.510 "c8938da8-fc49-413a-b8d4-0e4c982961f9" 00:32:07.510 ], 00:32:07.510 "product_name": "Malloc disk", 00:32:07.510 "block_size": 512, 00:32:07.510 "num_blocks": 65536, 00:32:07.510 "uuid": "c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:07.510 
"assigned_rate_limits": { 00:32:07.510 "rw_ios_per_sec": 0, 00:32:07.510 "rw_mbytes_per_sec": 0, 00:32:07.510 "r_mbytes_per_sec": 0, 00:32:07.510 "w_mbytes_per_sec": 0 00:32:07.510 }, 00:32:07.510 "claimed": true, 00:32:07.510 "claim_type": "exclusive_write", 00:32:07.510 "zoned": false, 00:32:07.510 "supported_io_types": { 00:32:07.510 "read": true, 00:32:07.510 "write": true, 00:32:07.510 "unmap": true, 00:32:07.510 "flush": true, 00:32:07.510 "reset": true, 00:32:07.510 "nvme_admin": false, 00:32:07.510 "nvme_io": false, 00:32:07.510 "nvme_io_md": false, 00:32:07.510 "write_zeroes": true, 00:32:07.510 "zcopy": true, 00:32:07.510 "get_zone_info": false, 00:32:07.510 "zone_management": false, 00:32:07.510 "zone_append": false, 00:32:07.510 "compare": false, 00:32:07.510 "compare_and_write": false, 00:32:07.510 "abort": true, 00:32:07.510 "seek_hole": false, 00:32:07.510 "seek_data": false, 00:32:07.510 "copy": true, 00:32:07.510 "nvme_iov_md": false 00:32:07.510 }, 00:32:07.510 "memory_domains": [ 00:32:07.510 { 00:32:07.510 "dma_device_id": "system", 00:32:07.510 "dma_device_type": 1 00:32:07.510 }, 00:32:07.510 { 00:32:07.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:07.510 "dma_device_type": 2 00:32:07.510 } 00:32:07.510 ], 00:32:07.510 "driver_specific": {} 00:32:07.510 } 00:32:07.510 ] 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.510 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:07.510 "name": "Existed_Raid", 00:32:07.510 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:07.510 "strip_size_kb": 64, 00:32:07.510 "state": "configuring", 00:32:07.510 "raid_level": "raid0", 00:32:07.510 "superblock": true, 00:32:07.510 "num_base_bdevs": 4, 00:32:07.510 "num_base_bdevs_discovered": 2, 00:32:07.510 "num_base_bdevs_operational": 4, 
00:32:07.510 "base_bdevs_list": [ 00:32:07.510 { 00:32:07.510 "name": "BaseBdev1", 00:32:07.510 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:07.510 "is_configured": true, 00:32:07.510 "data_offset": 2048, 00:32:07.510 "data_size": 63488 00:32:07.510 }, 00:32:07.510 { 00:32:07.510 "name": "BaseBdev2", 00:32:07.510 "uuid": "c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:07.510 "is_configured": true, 00:32:07.510 "data_offset": 2048, 00:32:07.510 "data_size": 63488 00:32:07.510 }, 00:32:07.510 { 00:32:07.510 "name": "BaseBdev3", 00:32:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.510 "is_configured": false, 00:32:07.510 "data_offset": 0, 00:32:07.510 "data_size": 0 00:32:07.510 }, 00:32:07.510 { 00:32:07.510 "name": "BaseBdev4", 00:32:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.510 "is_configured": false, 00:32:07.510 "data_offset": 0, 00:32:07.510 "data_size": 0 00:32:07.510 } 00:32:07.510 ] 00:32:07.510 }' 00:32:07.770 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:07.770 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.029 [2024-10-09 14:01:14.449233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:08.029 BaseBdev3 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.029 [ 00:32:08.029 { 00:32:08.029 "name": "BaseBdev3", 00:32:08.029 "aliases": [ 00:32:08.029 "7ac47c9e-4298-43db-a5e0-23719754f77f" 00:32:08.029 ], 00:32:08.029 "product_name": "Malloc disk", 00:32:08.029 "block_size": 512, 00:32:08.029 "num_blocks": 65536, 00:32:08.029 "uuid": "7ac47c9e-4298-43db-a5e0-23719754f77f", 00:32:08.029 "assigned_rate_limits": { 00:32:08.029 "rw_ios_per_sec": 0, 00:32:08.029 "rw_mbytes_per_sec": 0, 00:32:08.029 "r_mbytes_per_sec": 0, 00:32:08.029 "w_mbytes_per_sec": 0 00:32:08.029 }, 00:32:08.029 "claimed": true, 00:32:08.029 "claim_type": "exclusive_write", 00:32:08.029 "zoned": false, 00:32:08.029 "supported_io_types": { 00:32:08.029 "read": true, 00:32:08.029 
"write": true, 00:32:08.029 "unmap": true, 00:32:08.029 "flush": true, 00:32:08.029 "reset": true, 00:32:08.029 "nvme_admin": false, 00:32:08.029 "nvme_io": false, 00:32:08.029 "nvme_io_md": false, 00:32:08.029 "write_zeroes": true, 00:32:08.029 "zcopy": true, 00:32:08.029 "get_zone_info": false, 00:32:08.029 "zone_management": false, 00:32:08.029 "zone_append": false, 00:32:08.029 "compare": false, 00:32:08.029 "compare_and_write": false, 00:32:08.029 "abort": true, 00:32:08.029 "seek_hole": false, 00:32:08.029 "seek_data": false, 00:32:08.029 "copy": true, 00:32:08.029 "nvme_iov_md": false 00:32:08.029 }, 00:32:08.029 "memory_domains": [ 00:32:08.029 { 00:32:08.029 "dma_device_id": "system", 00:32:08.029 "dma_device_type": 1 00:32:08.029 }, 00:32:08.029 { 00:32:08.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:08.029 "dma_device_type": 2 00:32:08.029 } 00:32:08.029 ], 00:32:08.029 "driver_specific": {} 00:32:08.029 } 00:32:08.029 ] 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:08.029 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:08.030 "name": "Existed_Raid", 00:32:08.030 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:08.030 "strip_size_kb": 64, 00:32:08.030 "state": "configuring", 00:32:08.030 "raid_level": "raid0", 00:32:08.030 "superblock": true, 00:32:08.030 "num_base_bdevs": 4, 00:32:08.030 "num_base_bdevs_discovered": 3, 00:32:08.030 "num_base_bdevs_operational": 4, 00:32:08.030 "base_bdevs_list": [ 00:32:08.030 { 00:32:08.030 "name": "BaseBdev1", 00:32:08.030 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:08.030 "is_configured": true, 00:32:08.030 "data_offset": 2048, 00:32:08.030 "data_size": 63488 00:32:08.030 }, 00:32:08.030 { 00:32:08.030 "name": "BaseBdev2", 00:32:08.030 "uuid": 
"c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:08.030 "is_configured": true, 00:32:08.030 "data_offset": 2048, 00:32:08.030 "data_size": 63488 00:32:08.030 }, 00:32:08.030 { 00:32:08.030 "name": "BaseBdev3", 00:32:08.030 "uuid": "7ac47c9e-4298-43db-a5e0-23719754f77f", 00:32:08.030 "is_configured": true, 00:32:08.030 "data_offset": 2048, 00:32:08.030 "data_size": 63488 00:32:08.030 }, 00:32:08.030 { 00:32:08.030 "name": "BaseBdev4", 00:32:08.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.030 "is_configured": false, 00:32:08.030 "data_offset": 0, 00:32:08.030 "data_size": 0 00:32:08.030 } 00:32:08.030 ] 00:32:08.030 }' 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:08.030 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.598 [2024-10-09 14:01:14.916579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:08.598 [2024-10-09 14:01:14.916772] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:32:08.598 [2024-10-09 14:01:14.916788] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:08.598 [2024-10-09 14:01:14.917086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:08.598 [2024-10-09 14:01:14.917199] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:32:08.598 [2024-10-09 14:01:14.917221] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 
00:32:08.598 [2024-10-09 14:01:14.917330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.598 BaseBdev4 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.598 [ 00:32:08.598 { 00:32:08.598 "name": "BaseBdev4", 00:32:08.598 "aliases": [ 00:32:08.598 "55935288-17fd-422b-91f5-7c465358f56b" 00:32:08.598 ], 00:32:08.598 "product_name": "Malloc disk", 00:32:08.598 "block_size": 512, 00:32:08.598 
"num_blocks": 65536, 00:32:08.598 "uuid": "55935288-17fd-422b-91f5-7c465358f56b", 00:32:08.598 "assigned_rate_limits": { 00:32:08.598 "rw_ios_per_sec": 0, 00:32:08.598 "rw_mbytes_per_sec": 0, 00:32:08.598 "r_mbytes_per_sec": 0, 00:32:08.598 "w_mbytes_per_sec": 0 00:32:08.598 }, 00:32:08.598 "claimed": true, 00:32:08.598 "claim_type": "exclusive_write", 00:32:08.598 "zoned": false, 00:32:08.598 "supported_io_types": { 00:32:08.598 "read": true, 00:32:08.598 "write": true, 00:32:08.598 "unmap": true, 00:32:08.598 "flush": true, 00:32:08.598 "reset": true, 00:32:08.598 "nvme_admin": false, 00:32:08.598 "nvme_io": false, 00:32:08.598 "nvme_io_md": false, 00:32:08.598 "write_zeroes": true, 00:32:08.598 "zcopy": true, 00:32:08.598 "get_zone_info": false, 00:32:08.598 "zone_management": false, 00:32:08.598 "zone_append": false, 00:32:08.598 "compare": false, 00:32:08.598 "compare_and_write": false, 00:32:08.598 "abort": true, 00:32:08.598 "seek_hole": false, 00:32:08.598 "seek_data": false, 00:32:08.598 "copy": true, 00:32:08.598 "nvme_iov_md": false 00:32:08.598 }, 00:32:08.598 "memory_domains": [ 00:32:08.598 { 00:32:08.598 "dma_device_id": "system", 00:32:08.598 "dma_device_type": 1 00:32:08.598 }, 00:32:08.598 { 00:32:08.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:08.598 "dma_device_type": 2 00:32:08.598 } 00:32:08.598 ], 00:32:08.598 "driver_specific": {} 00:32:08.598 } 00:32:08.598 ] 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.598 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.599 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.599 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:08.599 "name": "Existed_Raid", 00:32:08.599 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:08.599 "strip_size_kb": 64, 00:32:08.599 "state": "online", 00:32:08.599 "raid_level": "raid0", 00:32:08.599 "superblock": true, 00:32:08.599 "num_base_bdevs": 4, 
00:32:08.599 "num_base_bdevs_discovered": 4, 00:32:08.599 "num_base_bdevs_operational": 4, 00:32:08.599 "base_bdevs_list": [ 00:32:08.599 { 00:32:08.599 "name": "BaseBdev1", 00:32:08.599 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:08.599 "is_configured": true, 00:32:08.599 "data_offset": 2048, 00:32:08.599 "data_size": 63488 00:32:08.599 }, 00:32:08.599 { 00:32:08.599 "name": "BaseBdev2", 00:32:08.599 "uuid": "c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:08.599 "is_configured": true, 00:32:08.599 "data_offset": 2048, 00:32:08.599 "data_size": 63488 00:32:08.599 }, 00:32:08.599 { 00:32:08.599 "name": "BaseBdev3", 00:32:08.599 "uuid": "7ac47c9e-4298-43db-a5e0-23719754f77f", 00:32:08.599 "is_configured": true, 00:32:08.599 "data_offset": 2048, 00:32:08.599 "data_size": 63488 00:32:08.599 }, 00:32:08.599 { 00:32:08.599 "name": "BaseBdev4", 00:32:08.599 "uuid": "55935288-17fd-422b-91f5-7c465358f56b", 00:32:08.599 "is_configured": true, 00:32:08.599 "data_offset": 2048, 00:32:08.599 "data_size": 63488 00:32:08.599 } 00:32:08.599 ] 00:32:08.599 }' 00:32:08.599 14:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:08.599 14:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:08.858 
14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.858 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.858 [2024-10-09 14:01:15.401090] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:09.118 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.118 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:09.118 "name": "Existed_Raid", 00:32:09.118 "aliases": [ 00:32:09.118 "17c213fa-9c4e-4e3e-bb27-ee506425e86a" 00:32:09.118 ], 00:32:09.118 "product_name": "Raid Volume", 00:32:09.118 "block_size": 512, 00:32:09.118 "num_blocks": 253952, 00:32:09.118 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:09.118 "assigned_rate_limits": { 00:32:09.118 "rw_ios_per_sec": 0, 00:32:09.118 "rw_mbytes_per_sec": 0, 00:32:09.118 "r_mbytes_per_sec": 0, 00:32:09.118 "w_mbytes_per_sec": 0 00:32:09.118 }, 00:32:09.118 "claimed": false, 00:32:09.118 "zoned": false, 00:32:09.118 "supported_io_types": { 00:32:09.118 "read": true, 00:32:09.118 "write": true, 00:32:09.118 "unmap": true, 00:32:09.118 "flush": true, 00:32:09.118 "reset": true, 00:32:09.118 "nvme_admin": false, 00:32:09.118 "nvme_io": false, 00:32:09.118 "nvme_io_md": false, 00:32:09.118 "write_zeroes": true, 00:32:09.118 "zcopy": false, 00:32:09.118 "get_zone_info": false, 00:32:09.118 "zone_management": false, 00:32:09.118 "zone_append": false, 00:32:09.118 "compare": false, 00:32:09.118 "compare_and_write": false, 00:32:09.118 "abort": false, 00:32:09.118 "seek_hole": false, 00:32:09.118 "seek_data": false, 00:32:09.118 "copy": false, 00:32:09.118 
"nvme_iov_md": false 00:32:09.118 }, 00:32:09.118 "memory_domains": [ 00:32:09.118 { 00:32:09.118 "dma_device_id": "system", 00:32:09.119 "dma_device_type": 1 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.119 "dma_device_type": 2 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "system", 00:32:09.119 "dma_device_type": 1 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.119 "dma_device_type": 2 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "system", 00:32:09.119 "dma_device_type": 1 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.119 "dma_device_type": 2 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "system", 00:32:09.119 "dma_device_type": 1 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:09.119 "dma_device_type": 2 00:32:09.119 } 00:32:09.119 ], 00:32:09.119 "driver_specific": { 00:32:09.119 "raid": { 00:32:09.119 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:09.119 "strip_size_kb": 64, 00:32:09.119 "state": "online", 00:32:09.119 "raid_level": "raid0", 00:32:09.119 "superblock": true, 00:32:09.119 "num_base_bdevs": 4, 00:32:09.119 "num_base_bdevs_discovered": 4, 00:32:09.119 "num_base_bdevs_operational": 4, 00:32:09.119 "base_bdevs_list": [ 00:32:09.119 { 00:32:09.119 "name": "BaseBdev1", 00:32:09.119 "uuid": "fc125191-147f-452c-b615-fcfa10acf580", 00:32:09.119 "is_configured": true, 00:32:09.119 "data_offset": 2048, 00:32:09.119 "data_size": 63488 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "name": "BaseBdev2", 00:32:09.119 "uuid": "c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:09.119 "is_configured": true, 00:32:09.119 "data_offset": 2048, 00:32:09.119 "data_size": 63488 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "name": "BaseBdev3", 00:32:09.119 "uuid": "7ac47c9e-4298-43db-a5e0-23719754f77f", 00:32:09.119 "is_configured": true, 
00:32:09.119 "data_offset": 2048, 00:32:09.119 "data_size": 63488 00:32:09.119 }, 00:32:09.119 { 00:32:09.119 "name": "BaseBdev4", 00:32:09.119 "uuid": "55935288-17fd-422b-91f5-7c465358f56b", 00:32:09.119 "is_configured": true, 00:32:09.119 "data_offset": 2048, 00:32:09.119 "data_size": 63488 00:32:09.119 } 00:32:09.119 ] 00:32:09.119 } 00:32:09.119 } 00:32:09.119 }' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:09.119 BaseBdev2 00:32:09.119 BaseBdev3 00:32:09.119 BaseBdev4' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:09.119 14:01:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.119 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.379 [2024-10-09 14:01:15.704827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:09.379 [2024-10-09 14:01:15.704869] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:09.379 [2024-10-09 14:01:15.704936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:09.379 "name": "Existed_Raid", 00:32:09.379 "uuid": "17c213fa-9c4e-4e3e-bb27-ee506425e86a", 00:32:09.379 "strip_size_kb": 64, 00:32:09.379 "state": "offline", 00:32:09.379 "raid_level": "raid0", 00:32:09.379 "superblock": true, 00:32:09.379 "num_base_bdevs": 4, 00:32:09.379 "num_base_bdevs_discovered": 3, 00:32:09.379 "num_base_bdevs_operational": 3, 00:32:09.379 "base_bdevs_list": [ 00:32:09.379 { 00:32:09.379 "name": null, 00:32:09.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:09.379 "is_configured": false, 00:32:09.379 "data_offset": 0, 00:32:09.379 "data_size": 63488 00:32:09.379 }, 00:32:09.379 { 00:32:09.379 "name": "BaseBdev2", 00:32:09.379 "uuid": "c8938da8-fc49-413a-b8d4-0e4c982961f9", 00:32:09.379 "is_configured": true, 00:32:09.379 "data_offset": 2048, 00:32:09.379 "data_size": 63488 00:32:09.379 }, 00:32:09.379 { 00:32:09.379 "name": "BaseBdev3", 00:32:09.379 "uuid": "7ac47c9e-4298-43db-a5e0-23719754f77f", 00:32:09.379 "is_configured": true, 00:32:09.379 "data_offset": 2048, 00:32:09.379 "data_size": 63488 00:32:09.379 }, 00:32:09.379 { 00:32:09.379 "name": "BaseBdev4", 00:32:09.379 "uuid": "55935288-17fd-422b-91f5-7c465358f56b", 00:32:09.379 "is_configured": true, 00:32:09.379 "data_offset": 2048, 00:32:09.379 "data_size": 63488 00:32:09.379 } 00:32:09.379 ] 00:32:09.379 }' 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:09.379 14:01:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.638 
14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.638 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 [2024-10-09 14:01:16.213016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 [2024-10-09 14:01:16.276998] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:09.898 14:01:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 [2024-10-09 14:01:16.341000] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:09.898 [2024-10-09 14:01:16.341048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.898 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.899 BaseBdev2 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:09.899 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:09.899 [ 00:32:09.899 { 00:32:09.899 "name": "BaseBdev2", 00:32:09.899 "aliases": [ 00:32:09.899 
"ce4b28f7-e49d-4f86-ad6b-2f2f469a6681" 00:32:09.899 ], 00:32:09.899 "product_name": "Malloc disk", 00:32:09.899 "block_size": 512, 00:32:09.899 "num_blocks": 65536, 00:32:09.899 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:09.899 "assigned_rate_limits": { 00:32:09.899 "rw_ios_per_sec": 0, 00:32:09.899 "rw_mbytes_per_sec": 0, 00:32:09.899 "r_mbytes_per_sec": 0, 00:32:09.899 "w_mbytes_per_sec": 0 00:32:09.899 }, 00:32:09.899 "claimed": false, 00:32:09.899 "zoned": false, 00:32:09.899 "supported_io_types": { 00:32:09.899 "read": true, 00:32:09.899 "write": true, 00:32:09.899 "unmap": true, 00:32:09.899 "flush": true, 00:32:09.899 "reset": true, 00:32:09.899 "nvme_admin": false, 00:32:09.899 "nvme_io": false, 00:32:09.899 "nvme_io_md": false, 00:32:09.899 "write_zeroes": true, 00:32:09.899 "zcopy": true, 00:32:09.899 "get_zone_info": false, 00:32:09.899 "zone_management": false, 00:32:09.899 "zone_append": false, 00:32:09.899 "compare": false, 00:32:09.899 "compare_and_write": false, 00:32:09.899 "abort": true, 00:32:09.899 "seek_hole": false, 00:32:09.899 "seek_data": false, 00:32:10.159 "copy": true, 00:32:10.159 "nvme_iov_md": false 00:32:10.159 }, 00:32:10.159 "memory_domains": [ 00:32:10.159 { 00:32:10.159 "dma_device_id": "system", 00:32:10.159 "dma_device_type": 1 00:32:10.159 }, 00:32:10.159 { 00:32:10.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.159 "dma_device_type": 2 00:32:10.159 } 00:32:10.159 ], 00:32:10.159 "driver_specific": {} 00:32:10.159 } 00:32:10.159 ] 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:10.159 14:01:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.159 BaseBdev3 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.159 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.159 [ 00:32:10.159 { 
00:32:10.159 "name": "BaseBdev3", 00:32:10.159 "aliases": [ 00:32:10.159 "ffdadb04-ce7c-4f12-a928-a80c8277add1" 00:32:10.159 ], 00:32:10.159 "product_name": "Malloc disk", 00:32:10.159 "block_size": 512, 00:32:10.159 "num_blocks": 65536, 00:32:10.159 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:10.159 "assigned_rate_limits": { 00:32:10.159 "rw_ios_per_sec": 0, 00:32:10.159 "rw_mbytes_per_sec": 0, 00:32:10.159 "r_mbytes_per_sec": 0, 00:32:10.159 "w_mbytes_per_sec": 0 00:32:10.159 }, 00:32:10.159 "claimed": false, 00:32:10.159 "zoned": false, 00:32:10.159 "supported_io_types": { 00:32:10.159 "read": true, 00:32:10.159 "write": true, 00:32:10.159 "unmap": true, 00:32:10.159 "flush": true, 00:32:10.159 "reset": true, 00:32:10.159 "nvme_admin": false, 00:32:10.159 "nvme_io": false, 00:32:10.160 "nvme_io_md": false, 00:32:10.160 "write_zeroes": true, 00:32:10.160 "zcopy": true, 00:32:10.160 "get_zone_info": false, 00:32:10.160 "zone_management": false, 00:32:10.160 "zone_append": false, 00:32:10.160 "compare": false, 00:32:10.160 "compare_and_write": false, 00:32:10.160 "abort": true, 00:32:10.160 "seek_hole": false, 00:32:10.160 "seek_data": false, 00:32:10.160 "copy": true, 00:32:10.160 "nvme_iov_md": false 00:32:10.160 }, 00:32:10.160 "memory_domains": [ 00:32:10.160 { 00:32:10.160 "dma_device_id": "system", 00:32:10.160 "dma_device_type": 1 00:32:10.160 }, 00:32:10.160 { 00:32:10.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.160 "dma_device_type": 2 00:32:10.160 } 00:32:10.160 ], 00:32:10.160 "driver_specific": {} 00:32:10.160 } 00:32:10.160 ] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.160 BaseBdev4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:32:10.160 [ 00:32:10.160 { 00:32:10.160 "name": "BaseBdev4", 00:32:10.160 "aliases": [ 00:32:10.160 "9e5a0b39-e051-4b0e-9fe3-080d0a86840b" 00:32:10.160 ], 00:32:10.160 "product_name": "Malloc disk", 00:32:10.160 "block_size": 512, 00:32:10.160 "num_blocks": 65536, 00:32:10.160 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:10.160 "assigned_rate_limits": { 00:32:10.160 "rw_ios_per_sec": 0, 00:32:10.160 "rw_mbytes_per_sec": 0, 00:32:10.160 "r_mbytes_per_sec": 0, 00:32:10.160 "w_mbytes_per_sec": 0 00:32:10.160 }, 00:32:10.160 "claimed": false, 00:32:10.160 "zoned": false, 00:32:10.160 "supported_io_types": { 00:32:10.160 "read": true, 00:32:10.160 "write": true, 00:32:10.160 "unmap": true, 00:32:10.160 "flush": true, 00:32:10.160 "reset": true, 00:32:10.160 "nvme_admin": false, 00:32:10.160 "nvme_io": false, 00:32:10.160 "nvme_io_md": false, 00:32:10.160 "write_zeroes": true, 00:32:10.160 "zcopy": true, 00:32:10.160 "get_zone_info": false, 00:32:10.160 "zone_management": false, 00:32:10.160 "zone_append": false, 00:32:10.160 "compare": false, 00:32:10.160 "compare_and_write": false, 00:32:10.160 "abort": true, 00:32:10.160 "seek_hole": false, 00:32:10.160 "seek_data": false, 00:32:10.160 "copy": true, 00:32:10.160 "nvme_iov_md": false 00:32:10.160 }, 00:32:10.160 "memory_domains": [ 00:32:10.160 { 00:32:10.160 "dma_device_id": "system", 00:32:10.160 "dma_device_type": 1 00:32:10.160 }, 00:32:10.160 { 00:32:10.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:10.160 "dma_device_type": 2 00:32:10.160 } 00:32:10.160 ], 00:32:10.160 "driver_specific": {} 00:32:10.160 } 00:32:10.160 ] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:10.160 14:01:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.160 [2024-10-09 14:01:16.547547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:10.160 [2024-10-09 14:01:16.547606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:10.160 [2024-10-09 14:01:16.547628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:10.160 [2024-10-09 14:01:16.549829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:10.160 [2024-10-09 14:01:16.549882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.160 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:10.160 "name": "Existed_Raid", 00:32:10.160 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:10.160 "strip_size_kb": 64, 00:32:10.160 "state": "configuring", 00:32:10.160 "raid_level": "raid0", 00:32:10.160 "superblock": true, 00:32:10.160 "num_base_bdevs": 4, 00:32:10.160 "num_base_bdevs_discovered": 3, 00:32:10.160 "num_base_bdevs_operational": 4, 00:32:10.160 "base_bdevs_list": [ 00:32:10.160 { 00:32:10.160 "name": "BaseBdev1", 00:32:10.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.160 "is_configured": false, 00:32:10.160 "data_offset": 0, 00:32:10.160 "data_size": 0 00:32:10.160 }, 00:32:10.160 { 00:32:10.160 "name": "BaseBdev2", 00:32:10.160 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:10.160 "is_configured": true, 00:32:10.160 "data_offset": 2048, 00:32:10.160 "data_size": 63488 
00:32:10.160 }, 00:32:10.160 { 00:32:10.160 "name": "BaseBdev3", 00:32:10.160 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:10.160 "is_configured": true, 00:32:10.160 "data_offset": 2048, 00:32:10.160 "data_size": 63488 00:32:10.160 }, 00:32:10.160 { 00:32:10.161 "name": "BaseBdev4", 00:32:10.161 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:10.161 "is_configured": true, 00:32:10.161 "data_offset": 2048, 00:32:10.161 "data_size": 63488 00:32:10.161 } 00:32:10.161 ] 00:32:10.161 }' 00:32:10.161 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:10.161 14:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.729 14:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.729 [2024-10-09 14:01:17.007638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:10.729 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:10.730 "name": "Existed_Raid", 00:32:10.730 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:10.730 "strip_size_kb": 64, 00:32:10.730 "state": "configuring", 00:32:10.730 "raid_level": "raid0", 00:32:10.730 "superblock": true, 00:32:10.730 "num_base_bdevs": 4, 00:32:10.730 "num_base_bdevs_discovered": 2, 00:32:10.730 "num_base_bdevs_operational": 4, 00:32:10.730 "base_bdevs_list": [ 00:32:10.730 { 00:32:10.730 "name": "BaseBdev1", 00:32:10.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.730 "is_configured": false, 00:32:10.730 "data_offset": 0, 00:32:10.730 "data_size": 0 00:32:10.730 }, 00:32:10.730 { 00:32:10.730 "name": null, 00:32:10.730 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:10.730 "is_configured": false, 00:32:10.730 "data_offset": 0, 00:32:10.730 "data_size": 63488 
00:32:10.730 }, 00:32:10.730 { 00:32:10.730 "name": "BaseBdev3", 00:32:10.730 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:10.730 "is_configured": true, 00:32:10.730 "data_offset": 2048, 00:32:10.730 "data_size": 63488 00:32:10.730 }, 00:32:10.730 { 00:32:10.730 "name": "BaseBdev4", 00:32:10.730 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:10.730 "is_configured": true, 00:32:10.730 "data_offset": 2048, 00:32:10.730 "data_size": 63488 00:32:10.730 } 00:32:10.730 ] 00:32:10.730 }' 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:10.730 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.989 [2024-10-09 14:01:17.522992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:10.989 BaseBdev1 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.989 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.990 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.249 [ 00:32:11.249 { 00:32:11.249 "name": "BaseBdev1", 00:32:11.249 "aliases": [ 00:32:11.249 "403047b9-ada6-4b22-952a-e47069b540c6" 00:32:11.249 ], 00:32:11.249 "product_name": "Malloc disk", 00:32:11.249 "block_size": 512, 00:32:11.249 "num_blocks": 65536, 00:32:11.249 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:11.249 "assigned_rate_limits": { 00:32:11.249 "rw_ios_per_sec": 0, 00:32:11.249 "rw_mbytes_per_sec": 0, 
00:32:11.249 "r_mbytes_per_sec": 0, 00:32:11.249 "w_mbytes_per_sec": 0 00:32:11.249 }, 00:32:11.249 "claimed": true, 00:32:11.249 "claim_type": "exclusive_write", 00:32:11.249 "zoned": false, 00:32:11.249 "supported_io_types": { 00:32:11.249 "read": true, 00:32:11.249 "write": true, 00:32:11.249 "unmap": true, 00:32:11.249 "flush": true, 00:32:11.249 "reset": true, 00:32:11.249 "nvme_admin": false, 00:32:11.249 "nvme_io": false, 00:32:11.249 "nvme_io_md": false, 00:32:11.249 "write_zeroes": true, 00:32:11.249 "zcopy": true, 00:32:11.249 "get_zone_info": false, 00:32:11.249 "zone_management": false, 00:32:11.249 "zone_append": false, 00:32:11.249 "compare": false, 00:32:11.249 "compare_and_write": false, 00:32:11.249 "abort": true, 00:32:11.249 "seek_hole": false, 00:32:11.249 "seek_data": false, 00:32:11.249 "copy": true, 00:32:11.249 "nvme_iov_md": false 00:32:11.249 }, 00:32:11.249 "memory_domains": [ 00:32:11.249 { 00:32:11.249 "dma_device_id": "system", 00:32:11.249 "dma_device_type": 1 00:32:11.249 }, 00:32:11.249 { 00:32:11.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.249 "dma_device_type": 2 00:32:11.249 } 00:32:11.249 ], 00:32:11.249 "driver_specific": {} 00:32:11.249 } 00:32:11.249 ] 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:11.249 14:01:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.249 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.249 "name": "Existed_Raid", 00:32:11.249 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:11.249 "strip_size_kb": 64, 00:32:11.249 "state": "configuring", 00:32:11.249 "raid_level": "raid0", 00:32:11.249 "superblock": true, 00:32:11.249 "num_base_bdevs": 4, 00:32:11.249 "num_base_bdevs_discovered": 3, 00:32:11.249 "num_base_bdevs_operational": 4, 00:32:11.249 "base_bdevs_list": [ 00:32:11.249 { 00:32:11.249 "name": "BaseBdev1", 00:32:11.249 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:11.249 "is_configured": true, 00:32:11.249 "data_offset": 2048, 00:32:11.249 "data_size": 63488 00:32:11.249 }, 00:32:11.249 { 
00:32:11.249 "name": null, 00:32:11.249 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:11.249 "is_configured": false, 00:32:11.250 "data_offset": 0, 00:32:11.250 "data_size": 63488 00:32:11.250 }, 00:32:11.250 { 00:32:11.250 "name": "BaseBdev3", 00:32:11.250 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:11.250 "is_configured": true, 00:32:11.250 "data_offset": 2048, 00:32:11.250 "data_size": 63488 00:32:11.250 }, 00:32:11.250 { 00:32:11.250 "name": "BaseBdev4", 00:32:11.250 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:11.250 "is_configured": true, 00:32:11.250 "data_offset": 2048, 00:32:11.250 "data_size": 63488 00:32:11.250 } 00:32:11.250 ] 00:32:11.250 }' 00:32:11.250 14:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.250 14:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.509 [2024-10-09 14:01:18.047147] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.509 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.768 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.768 14:01:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.768 "name": "Existed_Raid", 00:32:11.768 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:11.768 "strip_size_kb": 64, 00:32:11.768 "state": "configuring", 00:32:11.768 "raid_level": "raid0", 00:32:11.768 "superblock": true, 00:32:11.768 "num_base_bdevs": 4, 00:32:11.768 "num_base_bdevs_discovered": 2, 00:32:11.768 "num_base_bdevs_operational": 4, 00:32:11.768 "base_bdevs_list": [ 00:32:11.768 { 00:32:11.768 "name": "BaseBdev1", 00:32:11.768 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:11.768 "is_configured": true, 00:32:11.768 "data_offset": 2048, 00:32:11.768 "data_size": 63488 00:32:11.768 }, 00:32:11.768 { 00:32:11.768 "name": null, 00:32:11.768 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:11.768 "is_configured": false, 00:32:11.768 "data_offset": 0, 00:32:11.768 "data_size": 63488 00:32:11.768 }, 00:32:11.768 { 00:32:11.768 "name": null, 00:32:11.768 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:11.768 "is_configured": false, 00:32:11.768 "data_offset": 0, 00:32:11.768 "data_size": 63488 00:32:11.768 }, 00:32:11.768 { 00:32:11.768 "name": "BaseBdev4", 00:32:11.768 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:11.768 "is_configured": true, 00:32:11.768 "data_offset": 2048, 00:32:11.768 "data_size": 63488 00:32:11.768 } 00:32:11.768 ] 00:32:11.768 }' 00:32:11.768 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.768 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.026 
14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.026 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.027 [2024-10-09 14:01:18.551336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.027 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.286 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.286 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:12.286 "name": "Existed_Raid", 00:32:12.286 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:12.286 "strip_size_kb": 64, 00:32:12.286 "state": "configuring", 00:32:12.286 "raid_level": "raid0", 00:32:12.286 "superblock": true, 00:32:12.286 "num_base_bdevs": 4, 00:32:12.286 "num_base_bdevs_discovered": 3, 00:32:12.286 "num_base_bdevs_operational": 4, 00:32:12.286 "base_bdevs_list": [ 00:32:12.286 { 00:32:12.286 "name": "BaseBdev1", 00:32:12.286 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:12.286 "is_configured": true, 00:32:12.286 "data_offset": 2048, 00:32:12.286 "data_size": 63488 00:32:12.286 }, 00:32:12.286 { 00:32:12.286 "name": null, 00:32:12.286 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:12.286 "is_configured": false, 00:32:12.286 "data_offset": 0, 00:32:12.286 "data_size": 63488 00:32:12.286 }, 00:32:12.286 { 00:32:12.286 "name": "BaseBdev3", 00:32:12.286 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:12.286 "is_configured": true, 00:32:12.286 "data_offset": 2048, 00:32:12.286 "data_size": 63488 00:32:12.286 }, 00:32:12.286 { 00:32:12.286 "name": "BaseBdev4", 00:32:12.286 "uuid": 
"9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:12.286 "is_configured": true, 00:32:12.286 "data_offset": 2048, 00:32:12.286 "data_size": 63488 00:32:12.286 } 00:32:12.286 ] 00:32:12.286 }' 00:32:12.286 14:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:12.286 14:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.545 [2024-10-09 14:01:19.047433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:12.545 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.546 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:12.546 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:12.546 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:12.546 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:12.804 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:12.804 "name": "Existed_Raid", 00:32:12.804 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:12.804 "strip_size_kb": 64, 00:32:12.804 "state": "configuring", 00:32:12.804 "raid_level": "raid0", 00:32:12.804 "superblock": true, 00:32:12.804 "num_base_bdevs": 4, 00:32:12.804 "num_base_bdevs_discovered": 2, 00:32:12.804 "num_base_bdevs_operational": 4, 00:32:12.804 "base_bdevs_list": [ 00:32:12.804 { 00:32:12.804 "name": null, 00:32:12.804 
"uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:12.805 "is_configured": false, 00:32:12.805 "data_offset": 0, 00:32:12.805 "data_size": 63488 00:32:12.805 }, 00:32:12.805 { 00:32:12.805 "name": null, 00:32:12.805 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:12.805 "is_configured": false, 00:32:12.805 "data_offset": 0, 00:32:12.805 "data_size": 63488 00:32:12.805 }, 00:32:12.805 { 00:32:12.805 "name": "BaseBdev3", 00:32:12.805 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:12.805 "is_configured": true, 00:32:12.805 "data_offset": 2048, 00:32:12.805 "data_size": 63488 00:32:12.805 }, 00:32:12.805 { 00:32:12.805 "name": "BaseBdev4", 00:32:12.805 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:12.805 "is_configured": true, 00:32:12.805 "data_offset": 2048, 00:32:12.805 "data_size": 63488 00:32:12.805 } 00:32:12.805 ] 00:32:12.805 }' 00:32:12.805 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:12.805 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.064 [2024-10-09 14:01:19.546206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:13.064 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.065 "name": "Existed_Raid", 00:32:13.065 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:13.065 "strip_size_kb": 64, 00:32:13.065 "state": "configuring", 00:32:13.065 "raid_level": "raid0", 00:32:13.065 "superblock": true, 00:32:13.065 "num_base_bdevs": 4, 00:32:13.065 "num_base_bdevs_discovered": 3, 00:32:13.065 "num_base_bdevs_operational": 4, 00:32:13.065 "base_bdevs_list": [ 00:32:13.065 { 00:32:13.065 "name": null, 00:32:13.065 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:13.065 "is_configured": false, 00:32:13.065 "data_offset": 0, 00:32:13.065 "data_size": 63488 00:32:13.065 }, 00:32:13.065 { 00:32:13.065 "name": "BaseBdev2", 00:32:13.065 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:13.065 "is_configured": true, 00:32:13.065 "data_offset": 2048, 00:32:13.065 "data_size": 63488 00:32:13.065 }, 00:32:13.065 { 00:32:13.065 "name": "BaseBdev3", 00:32:13.065 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:13.065 "is_configured": true, 00:32:13.065 "data_offset": 2048, 00:32:13.065 "data_size": 63488 00:32:13.065 }, 00:32:13.065 { 00:32:13.065 "name": "BaseBdev4", 00:32:13.065 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:13.065 "is_configured": true, 00:32:13.065 "data_offset": 2048, 00:32:13.065 "data_size": 63488 00:32:13.065 } 00:32:13.065 ] 00:32:13.065 }' 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.065 14:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 14:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.633 14:01:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 403047b9-ada6-4b22-952a-e47069b540c6 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 [2024-10-09 14:01:20.109362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:13.633 [2024-10-09 14:01:20.109536] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:32:13.633 [2024-10-09 14:01:20.109562] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:13.633 [2024-10-09 14:01:20.109840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:32:13.633 [2024-10-09 14:01:20.109949] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:32:13.633 [2024-10-09 14:01:20.109964] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:32:13.633 [2024-10-09 14:01:20.110055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:13.633 NewBaseBdev 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.633 [ 00:32:13.633 { 00:32:13.633 "name": "NewBaseBdev", 00:32:13.633 "aliases": [ 00:32:13.633 "403047b9-ada6-4b22-952a-e47069b540c6" 00:32:13.633 ], 00:32:13.633 "product_name": "Malloc disk", 00:32:13.633 "block_size": 512, 00:32:13.633 "num_blocks": 65536, 00:32:13.633 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:13.633 "assigned_rate_limits": { 00:32:13.633 "rw_ios_per_sec": 0, 00:32:13.633 "rw_mbytes_per_sec": 0, 00:32:13.633 "r_mbytes_per_sec": 0, 00:32:13.633 "w_mbytes_per_sec": 0 00:32:13.633 }, 00:32:13.633 "claimed": true, 00:32:13.633 "claim_type": "exclusive_write", 00:32:13.633 "zoned": false, 00:32:13.633 "supported_io_types": { 00:32:13.633 "read": true, 00:32:13.633 "write": true, 00:32:13.633 "unmap": true, 00:32:13.633 "flush": true, 00:32:13.633 "reset": true, 00:32:13.633 "nvme_admin": false, 00:32:13.633 "nvme_io": false, 00:32:13.633 "nvme_io_md": false, 00:32:13.633 "write_zeroes": true, 00:32:13.633 "zcopy": true, 00:32:13.633 "get_zone_info": false, 00:32:13.633 "zone_management": false, 00:32:13.633 "zone_append": false, 00:32:13.633 "compare": false, 00:32:13.633 "compare_and_write": false, 00:32:13.633 "abort": true, 00:32:13.633 "seek_hole": false, 00:32:13.633 "seek_data": false, 00:32:13.633 "copy": true, 00:32:13.633 "nvme_iov_md": false 00:32:13.633 }, 00:32:13.633 "memory_domains": [ 00:32:13.633 { 00:32:13.633 "dma_device_id": "system", 00:32:13.633 "dma_device_type": 1 00:32:13.633 }, 00:32:13.633 { 00:32:13.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:13.633 "dma_device_type": 2 00:32:13.633 } 00:32:13.633 ], 00:32:13.633 "driver_specific": {} 00:32:13.633 } 00:32:13.633 ] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:13.633 14:01:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.633 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:13.634 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:13.634 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.893 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:13.893 "name": "Existed_Raid", 00:32:13.893 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:13.893 "strip_size_kb": 64, 00:32:13.893 
"state": "online", 00:32:13.893 "raid_level": "raid0", 00:32:13.893 "superblock": true, 00:32:13.893 "num_base_bdevs": 4, 00:32:13.893 "num_base_bdevs_discovered": 4, 00:32:13.893 "num_base_bdevs_operational": 4, 00:32:13.893 "base_bdevs_list": [ 00:32:13.893 { 00:32:13.893 "name": "NewBaseBdev", 00:32:13.893 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:13.893 "is_configured": true, 00:32:13.893 "data_offset": 2048, 00:32:13.893 "data_size": 63488 00:32:13.893 }, 00:32:13.893 { 00:32:13.893 "name": "BaseBdev2", 00:32:13.893 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:13.893 "is_configured": true, 00:32:13.893 "data_offset": 2048, 00:32:13.893 "data_size": 63488 00:32:13.893 }, 00:32:13.893 { 00:32:13.893 "name": "BaseBdev3", 00:32:13.893 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:13.893 "is_configured": true, 00:32:13.893 "data_offset": 2048, 00:32:13.893 "data_size": 63488 00:32:13.893 }, 00:32:13.893 { 00:32:13.893 "name": "BaseBdev4", 00:32:13.893 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:13.893 "is_configured": true, 00:32:13.893 "data_offset": 2048, 00:32:13.893 "data_size": 63488 00:32:13.893 } 00:32:13.893 ] 00:32:13.893 }' 00:32:13.893 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:13.893 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:14.152 
14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.152 [2024-10-09 14:01:20.609905] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.152 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:14.152 "name": "Existed_Raid", 00:32:14.152 "aliases": [ 00:32:14.152 "177a1866-eeaa-45cb-b15f-ad321c8b6feb" 00:32:14.152 ], 00:32:14.152 "product_name": "Raid Volume", 00:32:14.152 "block_size": 512, 00:32:14.152 "num_blocks": 253952, 00:32:14.152 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:14.152 "assigned_rate_limits": { 00:32:14.152 "rw_ios_per_sec": 0, 00:32:14.152 "rw_mbytes_per_sec": 0, 00:32:14.152 "r_mbytes_per_sec": 0, 00:32:14.152 "w_mbytes_per_sec": 0 00:32:14.152 }, 00:32:14.152 "claimed": false, 00:32:14.152 "zoned": false, 00:32:14.152 "supported_io_types": { 00:32:14.152 "read": true, 00:32:14.152 "write": true, 00:32:14.152 "unmap": true, 00:32:14.152 "flush": true, 00:32:14.152 "reset": true, 00:32:14.152 "nvme_admin": false, 00:32:14.152 "nvme_io": false, 00:32:14.152 "nvme_io_md": false, 00:32:14.152 "write_zeroes": true, 00:32:14.152 "zcopy": false, 00:32:14.152 "get_zone_info": false, 00:32:14.152 "zone_management": false, 00:32:14.152 "zone_append": false, 00:32:14.152 "compare": false, 00:32:14.152 "compare_and_write": false, 00:32:14.152 "abort": 
false, 00:32:14.152 "seek_hole": false, 00:32:14.152 "seek_data": false, 00:32:14.152 "copy": false, 00:32:14.152 "nvme_iov_md": false 00:32:14.152 }, 00:32:14.152 "memory_domains": [ 00:32:14.152 { 00:32:14.152 "dma_device_id": "system", 00:32:14.152 "dma_device_type": 1 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.152 "dma_device_type": 2 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "system", 00:32:14.152 "dma_device_type": 1 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.152 "dma_device_type": 2 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "system", 00:32:14.152 "dma_device_type": 1 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.152 "dma_device_type": 2 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "system", 00:32:14.152 "dma_device_type": 1 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.152 "dma_device_type": 2 00:32:14.152 } 00:32:14.152 ], 00:32:14.152 "driver_specific": { 00:32:14.152 "raid": { 00:32:14.152 "uuid": "177a1866-eeaa-45cb-b15f-ad321c8b6feb", 00:32:14.152 "strip_size_kb": 64, 00:32:14.152 "state": "online", 00:32:14.152 "raid_level": "raid0", 00:32:14.152 "superblock": true, 00:32:14.152 "num_base_bdevs": 4, 00:32:14.152 "num_base_bdevs_discovered": 4, 00:32:14.152 "num_base_bdevs_operational": 4, 00:32:14.152 "base_bdevs_list": [ 00:32:14.152 { 00:32:14.152 "name": "NewBaseBdev", 00:32:14.152 "uuid": "403047b9-ada6-4b22-952a-e47069b540c6", 00:32:14.152 "is_configured": true, 00:32:14.152 "data_offset": 2048, 00:32:14.152 "data_size": 63488 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 "name": "BaseBdev2", 00:32:14.152 "uuid": "ce4b28f7-e49d-4f86-ad6b-2f2f469a6681", 00:32:14.152 "is_configured": true, 00:32:14.152 "data_offset": 2048, 00:32:14.152 "data_size": 63488 00:32:14.152 }, 00:32:14.152 { 00:32:14.152 
"name": "BaseBdev3", 00:32:14.153 "uuid": "ffdadb04-ce7c-4f12-a928-a80c8277add1", 00:32:14.153 "is_configured": true, 00:32:14.153 "data_offset": 2048, 00:32:14.153 "data_size": 63488 00:32:14.153 }, 00:32:14.153 { 00:32:14.153 "name": "BaseBdev4", 00:32:14.153 "uuid": "9e5a0b39-e051-4b0e-9fe3-080d0a86840b", 00:32:14.153 "is_configured": true, 00:32:14.153 "data_offset": 2048, 00:32:14.153 "data_size": 63488 00:32:14.153 } 00:32:14.153 ] 00:32:14.153 } 00:32:14.153 } 00:32:14.153 }' 00:32:14.153 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:14.153 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:14.153 BaseBdev2 00:32:14.153 BaseBdev3 00:32:14.153 BaseBdev4' 00:32:14.153 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:14.412 14:01:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.412 [2024-10-09 14:01:20.933583] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:14.412 [2024-10-09 14:01:20.933618] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:14.412 [2024-10-09 14:01:20.933694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:14.412 [2024-10-09 14:01:20.933762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:14.412 [2024-10-09 14:01:20.933775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81353 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81353 ']' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81353 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.412 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81353 00:32:14.671 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:14.671 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:14.671 killing process with pid 81353 00:32:14.671 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81353' 00:32:14.671 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81353 00:32:14.671 [2024-10-09 14:01:20.979498] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:14.671 14:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81353 00:32:14.671 [2024-10-09 14:01:21.019948] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:14.930 14:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:14.930 00:32:14.930 real 0m9.689s 00:32:14.930 user 0m16.709s 00:32:14.930 sys 0m2.120s 00:32:14.930 14:01:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.930 14:01:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.930 ************************************ 00:32:14.930 END TEST raid_state_function_test_sb 00:32:14.930 ************************************ 00:32:14.930 14:01:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:32:14.930 14:01:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:14.930 14:01:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:14.930 14:01:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:14.930 ************************************ 00:32:14.930 START TEST raid_superblock_test 00:32:14.930 ************************************ 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82007 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82007 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 82007 ']' 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:14.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:14.930 14:01:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:14.930 [2024-10-09 14:01:21.420603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:14.930 [2024-10-09 14:01:21.420756] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82007 ] 00:32:15.189 [2024-10-09 14:01:21.577280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.189 [2024-10-09 14:01:21.628491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.189 [2024-10-09 14:01:21.673405] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:15.189 [2024-10-09 14:01:21.673454] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:16.126 
14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 malloc1 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 [2024-10-09 14:01:22.350956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:16.126 [2024-10-09 14:01:22.351040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.126 [2024-10-09 14:01:22.351064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:16.126 [2024-10-09 14:01:22.351082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.126 [2024-10-09 14:01:22.353648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.126 [2024-10-09 14:01:22.353706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:16.126 pt1 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 malloc2 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 [2024-10-09 14:01:22.388634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:16.126 [2024-10-09 14:01:22.388702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.126 [2024-10-09 14:01:22.388727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:16.126 [2024-10-09 14:01:22.388746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.126 [2024-10-09 14:01:22.392064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.126 [2024-10-09 14:01:22.392118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:16.126 
pt2 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 malloc3 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 [2024-10-09 14:01:22.413748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:16.126 [2024-10-09 14:01:22.413799] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.126 [2024-10-09 14:01:22.413820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:16.126 [2024-10-09 14:01:22.413834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.126 [2024-10-09 14:01:22.416305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.126 [2024-10-09 14:01:22.416345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:16.126 pt3 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.126 malloc4 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.126 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.127 [2024-10-09 14:01:22.438918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:16.127 [2024-10-09 14:01:22.438971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.127 [2024-10-09 14:01:22.438989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:16.127 [2024-10-09 14:01:22.439006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.127 [2024-10-09 14:01:22.441405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.127 [2024-10-09 14:01:22.441446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:16.127 pt4 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.127 [2024-10-09 14:01:22.451017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:16.127 [2024-10-09 
14:01:22.453176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:16.127 [2024-10-09 14:01:22.453240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:16.127 [2024-10-09 14:01:22.453305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:16.127 [2024-10-09 14:01:22.453450] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:32:16.127 [2024-10-09 14:01:22.453473] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:16.127 [2024-10-09 14:01:22.453787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:16.127 [2024-10-09 14:01:22.453956] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:32:16.127 [2024-10-09 14:01:22.453974] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:32:16.127 [2024-10-09 14:01:22.454090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:16.127 "name": "raid_bdev1", 00:32:16.127 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:16.127 "strip_size_kb": 64, 00:32:16.127 "state": "online", 00:32:16.127 "raid_level": "raid0", 00:32:16.127 "superblock": true, 00:32:16.127 "num_base_bdevs": 4, 00:32:16.127 "num_base_bdevs_discovered": 4, 00:32:16.127 "num_base_bdevs_operational": 4, 00:32:16.127 "base_bdevs_list": [ 00:32:16.127 { 00:32:16.127 "name": "pt1", 00:32:16.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:16.127 "is_configured": true, 00:32:16.127 "data_offset": 2048, 00:32:16.127 "data_size": 63488 00:32:16.127 }, 00:32:16.127 { 00:32:16.127 "name": "pt2", 00:32:16.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:16.127 "is_configured": true, 00:32:16.127 "data_offset": 2048, 00:32:16.127 "data_size": 63488 00:32:16.127 }, 00:32:16.127 { 00:32:16.127 "name": "pt3", 00:32:16.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:16.127 "is_configured": true, 00:32:16.127 "data_offset": 2048, 00:32:16.127 
"data_size": 63488 00:32:16.127 }, 00:32:16.127 { 00:32:16.127 "name": "pt4", 00:32:16.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:16.127 "is_configured": true, 00:32:16.127 "data_offset": 2048, 00:32:16.127 "data_size": 63488 00:32:16.127 } 00:32:16.127 ] 00:32:16.127 }' 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:16.127 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:16.387 [2024-10-09 14:01:22.887382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:16.387 "name": "raid_bdev1", 00:32:16.387 "aliases": [ 00:32:16.387 "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f" 
00:32:16.387 ], 00:32:16.387 "product_name": "Raid Volume", 00:32:16.387 "block_size": 512, 00:32:16.387 "num_blocks": 253952, 00:32:16.387 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:16.387 "assigned_rate_limits": { 00:32:16.387 "rw_ios_per_sec": 0, 00:32:16.387 "rw_mbytes_per_sec": 0, 00:32:16.387 "r_mbytes_per_sec": 0, 00:32:16.387 "w_mbytes_per_sec": 0 00:32:16.387 }, 00:32:16.387 "claimed": false, 00:32:16.387 "zoned": false, 00:32:16.387 "supported_io_types": { 00:32:16.387 "read": true, 00:32:16.387 "write": true, 00:32:16.387 "unmap": true, 00:32:16.387 "flush": true, 00:32:16.387 "reset": true, 00:32:16.387 "nvme_admin": false, 00:32:16.387 "nvme_io": false, 00:32:16.387 "nvme_io_md": false, 00:32:16.387 "write_zeroes": true, 00:32:16.387 "zcopy": false, 00:32:16.387 "get_zone_info": false, 00:32:16.387 "zone_management": false, 00:32:16.387 "zone_append": false, 00:32:16.387 "compare": false, 00:32:16.387 "compare_and_write": false, 00:32:16.387 "abort": false, 00:32:16.387 "seek_hole": false, 00:32:16.387 "seek_data": false, 00:32:16.387 "copy": false, 00:32:16.387 "nvme_iov_md": false 00:32:16.387 }, 00:32:16.387 "memory_domains": [ 00:32:16.387 { 00:32:16.387 "dma_device_id": "system", 00:32:16.387 "dma_device_type": 1 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.387 "dma_device_type": 2 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "system", 00:32:16.387 "dma_device_type": 1 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.387 "dma_device_type": 2 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "system", 00:32:16.387 "dma_device_type": 1 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.387 "dma_device_type": 2 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": "system", 00:32:16.387 "dma_device_type": 1 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:16.387 "dma_device_type": 2 00:32:16.387 } 00:32:16.387 ], 00:32:16.387 "driver_specific": { 00:32:16.387 "raid": { 00:32:16.387 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:16.387 "strip_size_kb": 64, 00:32:16.387 "state": "online", 00:32:16.387 "raid_level": "raid0", 00:32:16.387 "superblock": true, 00:32:16.387 "num_base_bdevs": 4, 00:32:16.387 "num_base_bdevs_discovered": 4, 00:32:16.387 "num_base_bdevs_operational": 4, 00:32:16.387 "base_bdevs_list": [ 00:32:16.387 { 00:32:16.387 "name": "pt1", 00:32:16.387 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:16.387 "is_configured": true, 00:32:16.387 "data_offset": 2048, 00:32:16.387 "data_size": 63488 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "name": "pt2", 00:32:16.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:16.387 "is_configured": true, 00:32:16.387 "data_offset": 2048, 00:32:16.387 "data_size": 63488 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "name": "pt3", 00:32:16.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:16.387 "is_configured": true, 00:32:16.387 "data_offset": 2048, 00:32:16.387 "data_size": 63488 00:32:16.387 }, 00:32:16.387 { 00:32:16.387 "name": "pt4", 00:32:16.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:16.387 "is_configured": true, 00:32:16.387 "data_offset": 2048, 00:32:16.387 "data_size": 63488 00:32:16.387 } 00:32:16.387 ] 00:32:16.387 } 00:32:16.387 } 00:32:16.387 }' 00:32:16.387 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:16.647 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:16.647 pt2 00:32:16.647 pt3 00:32:16.647 pt4' 00:32:16.647 14:01:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:16.647 14:01:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:16.647 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.647 [2024-10-09 14:01:23.179709] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f ']' 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 [2024-10-09 14:01:23.215409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:16.907 [2024-10-09 14:01:23.215448] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:16.907 [2024-10-09 14:01:23.215512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:16.907 [2024-10-09 14:01:23.215593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:16.907 [2024-10-09 14:01:23.215605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.907 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:16.908 14:01:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 [2024-10-09 14:01:23.355472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:16.908 [2024-10-09 14:01:23.357726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:16.908 [2024-10-09 14:01:23.357783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:16.908 [2024-10-09 14:01:23.357814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:16.908 [2024-10-09 14:01:23.357864] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:16.908 [2024-10-09 14:01:23.357907] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:16.908 [2024-10-09 14:01:23.357929] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:16.908 [2024-10-09 14:01:23.357948] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:16.908 [2024-10-09 14:01:23.357965] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:16.908 [2024-10-09 14:01:23.357975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:32:16.908 request: 00:32:16.908 { 00:32:16.908 "name": "raid_bdev1", 00:32:16.908 "raid_level": "raid0", 00:32:16.908 "base_bdevs": [ 00:32:16.908 "malloc1", 00:32:16.908 "malloc2", 00:32:16.908 "malloc3", 00:32:16.908 "malloc4" 00:32:16.908 ], 00:32:16.908 "strip_size_kb": 64, 00:32:16.908 "superblock": false, 00:32:16.908 "method": "bdev_raid_create", 00:32:16.908 "req_id": 1 00:32:16.908 } 00:32:16.908 Got JSON-RPC error response 00:32:16.908 response: 00:32:16.908 { 00:32:16.908 "code": -17, 00:32:16.908 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:16.908 } 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 [2024-10-09 14:01:23.415439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:16.908 [2024-10-09 14:01:23.415485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:16.908 [2024-10-09 14:01:23.415510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:16.908 [2024-10-09 14:01:23.415521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:16.908 [2024-10-09 14:01:23.418016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:16.908 [2024-10-09 14:01:23.418052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:16.908 [2024-10-09 14:01:23.418118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:16.908 [2024-10-09 14:01:23.418159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:16.908 pt1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:16.908 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:17.168 "name": "raid_bdev1", 00:32:17.168 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:17.168 "strip_size_kb": 64, 00:32:17.168 "state": "configuring", 00:32:17.168 "raid_level": "raid0", 00:32:17.168 "superblock": true, 00:32:17.168 "num_base_bdevs": 4, 00:32:17.168 "num_base_bdevs_discovered": 1, 00:32:17.168 "num_base_bdevs_operational": 4, 00:32:17.168 "base_bdevs_list": [ 00:32:17.168 { 00:32:17.168 "name": "pt1", 00:32:17.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.168 "is_configured": true, 00:32:17.168 "data_offset": 2048, 00:32:17.168 "data_size": 63488 00:32:17.168 }, 00:32:17.168 { 00:32:17.168 "name": null, 00:32:17.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.168 "is_configured": false, 00:32:17.168 "data_offset": 2048, 00:32:17.168 "data_size": 63488 00:32:17.168 }, 00:32:17.168 { 00:32:17.168 "name": null, 00:32:17.168 
"uuid": "00000000-0000-0000-0000-000000000003", 00:32:17.168 "is_configured": false, 00:32:17.168 "data_offset": 2048, 00:32:17.168 "data_size": 63488 00:32:17.168 }, 00:32:17.168 { 00:32:17.168 "name": null, 00:32:17.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:17.168 "is_configured": false, 00:32:17.168 "data_offset": 2048, 00:32:17.168 "data_size": 63488 00:32:17.168 } 00:32:17.168 ] 00:32:17.168 }' 00:32:17.168 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:17.168 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.427 [2024-10-09 14:01:23.891584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:17.427 [2024-10-09 14:01:23.891651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.427 [2024-10-09 14:01:23.891676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:17.427 [2024-10-09 14:01:23.891688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.427 [2024-10-09 14:01:23.892112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.427 [2024-10-09 14:01:23.892139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:17.427 [2024-10-09 14:01:23.892220] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:17.427 [2024-10-09 14:01:23.892243] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:17.427 pt2 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.427 [2024-10-09 14:01:23.899586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.427 14:01:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:17.427 "name": "raid_bdev1", 00:32:17.427 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:17.427 "strip_size_kb": 64, 00:32:17.427 "state": "configuring", 00:32:17.427 "raid_level": "raid0", 00:32:17.427 "superblock": true, 00:32:17.427 "num_base_bdevs": 4, 00:32:17.427 "num_base_bdevs_discovered": 1, 00:32:17.427 "num_base_bdevs_operational": 4, 00:32:17.427 "base_bdevs_list": [ 00:32:17.427 { 00:32:17.427 "name": "pt1", 00:32:17.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.427 "is_configured": true, 00:32:17.427 "data_offset": 2048, 00:32:17.427 "data_size": 63488 00:32:17.427 }, 00:32:17.427 { 00:32:17.427 "name": null, 00:32:17.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.427 "is_configured": false, 00:32:17.427 "data_offset": 0, 00:32:17.427 "data_size": 63488 00:32:17.427 }, 00:32:17.427 { 00:32:17.427 "name": null, 00:32:17.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:17.427 "is_configured": false, 00:32:17.427 "data_offset": 2048, 00:32:17.427 "data_size": 63488 00:32:17.427 }, 00:32:17.427 { 00:32:17.427 "name": null, 00:32:17.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:17.427 "is_configured": false, 00:32:17.427 "data_offset": 2048, 00:32:17.427 "data_size": 63488 00:32:17.427 } 00:32:17.427 ] 00:32:17.427 }' 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:17.427 14:01:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.995 [2024-10-09 14:01:24.327660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:17.995 [2024-10-09 14:01:24.327724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.995 [2024-10-09 14:01:24.327746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:17.995 [2024-10-09 14:01:24.327760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.995 [2024-10-09 14:01:24.328160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.995 [2024-10-09 14:01:24.328195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:17.995 [2024-10-09 14:01:24.328267] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:17.995 [2024-10-09 14:01:24.328294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:17.995 pt2 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.995 [2024-10-09 14:01:24.335620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:17.995 [2024-10-09 14:01:24.335678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.995 [2024-10-09 14:01:24.335696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:17.995 [2024-10-09 14:01:24.335709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.995 [2024-10-09 14:01:24.336046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.995 [2024-10-09 14:01:24.336080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:17.995 [2024-10-09 14:01:24.336138] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:17.995 [2024-10-09 14:01:24.336160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:17.995 pt3 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.995 [2024-10-09 14:01:24.343638] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:17.995 [2024-10-09 14:01:24.343693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.995 [2024-10-09 14:01:24.343719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:17.995 [2024-10-09 14:01:24.343734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.995 [2024-10-09 14:01:24.344052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.995 [2024-10-09 14:01:24.344080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:17.995 [2024-10-09 14:01:24.344133] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:17.995 [2024-10-09 14:01:24.344156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:17.995 [2024-10-09 14:01:24.344248] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:32:17.995 [2024-10-09 14:01:24.344270] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:17.995 [2024-10-09 14:01:24.344509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:17.995 [2024-10-09 14:01:24.344640] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:32:17.995 [2024-10-09 14:01:24.344657] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:32:17.995 [2024-10-09 14:01:24.344757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.995 pt4 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:17.995 "name": "raid_bdev1", 00:32:17.995 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:17.995 "strip_size_kb": 64, 00:32:17.995 "state": "online", 00:32:17.995 "raid_level": "raid0", 00:32:17.995 
"superblock": true, 00:32:17.995 "num_base_bdevs": 4, 00:32:17.995 "num_base_bdevs_discovered": 4, 00:32:17.995 "num_base_bdevs_operational": 4, 00:32:17.995 "base_bdevs_list": [ 00:32:17.995 { 00:32:17.995 "name": "pt1", 00:32:17.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.995 "is_configured": true, 00:32:17.995 "data_offset": 2048, 00:32:17.995 "data_size": 63488 00:32:17.995 }, 00:32:17.995 { 00:32:17.995 "name": "pt2", 00:32:17.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.995 "is_configured": true, 00:32:17.995 "data_offset": 2048, 00:32:17.995 "data_size": 63488 00:32:17.995 }, 00:32:17.995 { 00:32:17.995 "name": "pt3", 00:32:17.995 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:17.995 "is_configured": true, 00:32:17.995 "data_offset": 2048, 00:32:17.995 "data_size": 63488 00:32:17.995 }, 00:32:17.995 { 00:32:17.995 "name": "pt4", 00:32:17.995 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:17.995 "is_configured": true, 00:32:17.995 "data_offset": 2048, 00:32:17.995 "data_size": 63488 00:32:17.995 } 00:32:17.995 ] 00:32:17.995 }' 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:17.995 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.254 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:18.255 14:01:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.255 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.255 [2024-10-09 14:01:24.796096] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:18.514 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.514 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:18.514 "name": "raid_bdev1", 00:32:18.514 "aliases": [ 00:32:18.514 "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f" 00:32:18.514 ], 00:32:18.514 "product_name": "Raid Volume", 00:32:18.514 "block_size": 512, 00:32:18.514 "num_blocks": 253952, 00:32:18.514 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:18.514 "assigned_rate_limits": { 00:32:18.514 "rw_ios_per_sec": 0, 00:32:18.514 "rw_mbytes_per_sec": 0, 00:32:18.514 "r_mbytes_per_sec": 0, 00:32:18.514 "w_mbytes_per_sec": 0 00:32:18.514 }, 00:32:18.514 "claimed": false, 00:32:18.514 "zoned": false, 00:32:18.514 "supported_io_types": { 00:32:18.514 "read": true, 00:32:18.514 "write": true, 00:32:18.514 "unmap": true, 00:32:18.514 "flush": true, 00:32:18.514 "reset": true, 00:32:18.514 "nvme_admin": false, 00:32:18.514 "nvme_io": false, 00:32:18.514 "nvme_io_md": false, 00:32:18.514 "write_zeroes": true, 00:32:18.514 "zcopy": false, 00:32:18.514 "get_zone_info": false, 00:32:18.514 "zone_management": false, 00:32:18.514 "zone_append": false, 00:32:18.514 "compare": false, 00:32:18.514 "compare_and_write": false, 00:32:18.514 "abort": false, 00:32:18.514 "seek_hole": false, 00:32:18.514 "seek_data": false, 00:32:18.514 "copy": false, 00:32:18.514 "nvme_iov_md": false 00:32:18.514 }, 00:32:18.514 
"memory_domains": [ 00:32:18.514 { 00:32:18.514 "dma_device_id": "system", 00:32:18.514 "dma_device_type": 1 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.514 "dma_device_type": 2 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "system", 00:32:18.514 "dma_device_type": 1 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.514 "dma_device_type": 2 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "system", 00:32:18.514 "dma_device_type": 1 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.514 "dma_device_type": 2 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "system", 00:32:18.514 "dma_device_type": 1 00:32:18.514 }, 00:32:18.514 { 00:32:18.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.514 "dma_device_type": 2 00:32:18.514 } 00:32:18.514 ], 00:32:18.514 "driver_specific": { 00:32:18.514 "raid": { 00:32:18.514 "uuid": "06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f", 00:32:18.514 "strip_size_kb": 64, 00:32:18.514 "state": "online", 00:32:18.514 "raid_level": "raid0", 00:32:18.514 "superblock": true, 00:32:18.515 "num_base_bdevs": 4, 00:32:18.515 "num_base_bdevs_discovered": 4, 00:32:18.515 "num_base_bdevs_operational": 4, 00:32:18.515 "base_bdevs_list": [ 00:32:18.515 { 00:32:18.515 "name": "pt1", 00:32:18.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:18.515 "is_configured": true, 00:32:18.515 "data_offset": 2048, 00:32:18.515 "data_size": 63488 00:32:18.515 }, 00:32:18.515 { 00:32:18.515 "name": "pt2", 00:32:18.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:18.515 "is_configured": true, 00:32:18.515 "data_offset": 2048, 00:32:18.515 "data_size": 63488 00:32:18.515 }, 00:32:18.515 { 00:32:18.515 "name": "pt3", 00:32:18.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:18.515 "is_configured": true, 00:32:18.515 "data_offset": 2048, 00:32:18.515 "data_size": 63488 
00:32:18.515 }, 00:32:18.515 { 00:32:18.515 "name": "pt4", 00:32:18.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:18.515 "is_configured": true, 00:32:18.515 "data_offset": 2048, 00:32:18.515 "data_size": 63488 00:32:18.515 } 00:32:18.515 ] 00:32:18.515 } 00:32:18.515 } 00:32:18.515 }' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:18.515 pt2 00:32:18.515 pt3 00:32:18.515 pt4' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.515 14:01:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.515 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.774 [2024-10-09 14:01:25.136169] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f '!=' 06402a9d-dcb2-4bcd-a7dc-512d0a04ca8f ']' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82007 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 82007 ']' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 82007 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82007 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:18.774 killing process with pid 82007 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82007' 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 82007 00:32:18.774 [2024-10-09 14:01:25.209537] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:18.774 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 82007 00:32:18.774 [2024-10-09 14:01:25.209648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.774 [2024-10-09 14:01:25.209748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:18.774 [2024-10-09 14:01:25.209764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:32:18.774 [2024-10-09 14:01:25.257218] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:19.034 14:01:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:19.034 00:32:19.034 real 0m4.164s 00:32:19.034 user 0m6.666s 00:32:19.034 sys 0m0.959s 00:32:19.034 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:19.034 14:01:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.034 ************************************ 00:32:19.034 END TEST raid_superblock_test 
00:32:19.034 ************************************ 00:32:19.034 14:01:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:32:19.034 14:01:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:19.034 14:01:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:19.034 14:01:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:19.034 ************************************ 00:32:19.034 START TEST raid_read_error_test 00:32:19.034 ************************************ 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:19.034 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HGd9j0t6Q1 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82255 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82255 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82255 ']' 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.293 14:01:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.293 [2024-10-09 14:01:25.694311] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:19.293 [2024-10-09 14:01:25.694518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82255 ] 00:32:19.551 [2024-10-09 14:01:25.873532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.551 [2024-10-09 14:01:25.920583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.551 [2024-10-09 14:01:25.964669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:19.551 [2024-10-09 14:01:25.964709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.118 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 BaseBdev1_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 true 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 [2024-10-09 14:01:26.690195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:20.377 [2024-10-09 14:01:26.690261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.377 [2024-10-09 14:01:26.690287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:20.377 [2024-10-09 14:01:26.690301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.377 [2024-10-09 14:01:26.692944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.377 [2024-10-09 14:01:26.692988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:20.377 BaseBdev1 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 BaseBdev2_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 true 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 [2024-10-09 14:01:26.732539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:20.377 [2024-10-09 14:01:26.732603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.377 [2024-10-09 14:01:26.732625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:20.377 [2024-10-09 14:01:26.732636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.377 [2024-10-09 14:01:26.735153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.377 [2024-10-09 14:01:26.735193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:20.377 BaseBdev2 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 BaseBdev3_malloc 00:32:20.377 14:01:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 true 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 [2024-10-09 14:01:26.766189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:20.377 [2024-10-09 14:01:26.766237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.377 [2024-10-09 14:01:26.766261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:20.377 [2024-10-09 14:01:26.766272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.377 [2024-10-09 14:01:26.768928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.377 [2024-10-09 14:01:26.768970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:20.377 BaseBdev3 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 BaseBdev4_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.377 true 00:32:20.377 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.378 [2024-10-09 14:01:26.800009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:20.378 [2024-10-09 14:01:26.800060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.378 [2024-10-09 14:01:26.800085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:20.378 [2024-10-09 14:01:26.800097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.378 [2024-10-09 14:01:26.802580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.378 [2024-10-09 14:01:26.802619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:20.378 BaseBdev4 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.378 [2024-10-09 14:01:26.812073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:20.378 [2024-10-09 14:01:26.814258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:20.378 [2024-10-09 14:01:26.814351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:20.378 [2024-10-09 14:01:26.814405] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:20.378 [2024-10-09 14:01:26.814624] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:32:20.378 [2024-10-09 14:01:26.814644] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:20.378 [2024-10-09 14:01:26.814913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:20.378 [2024-10-09 14:01:26.815052] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:32:20.378 [2024-10-09 14:01:26.815072] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:32:20.378 [2024-10-09 14:01:26.815192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:20.378 14:01:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.378 "name": "raid_bdev1", 00:32:20.378 "uuid": "f07310e9-4b4b-491f-b536-dc016370db5e", 00:32:20.378 "strip_size_kb": 64, 00:32:20.378 "state": "online", 00:32:20.378 "raid_level": "raid0", 00:32:20.378 "superblock": true, 00:32:20.378 "num_base_bdevs": 4, 00:32:20.378 "num_base_bdevs_discovered": 4, 00:32:20.378 "num_base_bdevs_operational": 4, 00:32:20.378 "base_bdevs_list": [ 00:32:20.378 
{ 00:32:20.378 "name": "BaseBdev1", 00:32:20.378 "uuid": "d7b18814-e835-5578-aa2e-bc05b6d06e3e", 00:32:20.378 "is_configured": true, 00:32:20.378 "data_offset": 2048, 00:32:20.378 "data_size": 63488 00:32:20.378 }, 00:32:20.378 { 00:32:20.378 "name": "BaseBdev2", 00:32:20.378 "uuid": "e970c638-6f53-5455-bd56-060f7e73385e", 00:32:20.378 "is_configured": true, 00:32:20.378 "data_offset": 2048, 00:32:20.378 "data_size": 63488 00:32:20.378 }, 00:32:20.378 { 00:32:20.378 "name": "BaseBdev3", 00:32:20.378 "uuid": "5502dc26-777c-53a3-80a6-4547b972c8dd", 00:32:20.378 "is_configured": true, 00:32:20.378 "data_offset": 2048, 00:32:20.378 "data_size": 63488 00:32:20.378 }, 00:32:20.378 { 00:32:20.378 "name": "BaseBdev4", 00:32:20.378 "uuid": "4777c75a-6cda-5dcd-9659-9812e115567c", 00:32:20.378 "is_configured": true, 00:32:20.378 "data_offset": 2048, 00:32:20.378 "data_size": 63488 00:32:20.378 } 00:32:20.378 ] 00:32:20.378 }' 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.378 14:01:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:20.992 14:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:20.992 14:01:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:20.992 [2024-10-09 14:01:27.380564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.948 14:01:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:21.948 14:01:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:21.948 "name": "raid_bdev1", 00:32:21.948 "uuid": "f07310e9-4b4b-491f-b536-dc016370db5e", 00:32:21.948 "strip_size_kb": 64, 00:32:21.948 "state": "online", 00:32:21.948 "raid_level": "raid0", 00:32:21.948 "superblock": true, 00:32:21.948 "num_base_bdevs": 4, 00:32:21.948 "num_base_bdevs_discovered": 4, 00:32:21.948 "num_base_bdevs_operational": 4, 00:32:21.948 "base_bdevs_list": [ 00:32:21.948 { 00:32:21.948 "name": "BaseBdev1", 00:32:21.948 "uuid": "d7b18814-e835-5578-aa2e-bc05b6d06e3e", 00:32:21.948 "is_configured": true, 00:32:21.948 "data_offset": 2048, 00:32:21.948 "data_size": 63488 00:32:21.948 }, 00:32:21.948 { 00:32:21.948 "name": "BaseBdev2", 00:32:21.948 "uuid": "e970c638-6f53-5455-bd56-060f7e73385e", 00:32:21.948 "is_configured": true, 00:32:21.948 "data_offset": 2048, 00:32:21.948 "data_size": 63488 00:32:21.948 }, 00:32:21.948 { 00:32:21.948 "name": "BaseBdev3", 00:32:21.948 "uuid": "5502dc26-777c-53a3-80a6-4547b972c8dd", 00:32:21.948 "is_configured": true, 00:32:21.948 "data_offset": 2048, 00:32:21.948 "data_size": 63488 00:32:21.948 }, 00:32:21.948 { 00:32:21.948 "name": "BaseBdev4", 00:32:21.948 "uuid": "4777c75a-6cda-5dcd-9659-9812e115567c", 00:32:21.948 "is_configured": true, 00:32:21.949 "data_offset": 2048, 00:32:21.949 "data_size": 63488 00:32:21.949 } 00:32:21.949 ] 00:32:21.949 }' 00:32:21.949 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:21.949 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 [2024-10-09 14:01:28.711256] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:22.207 [2024-10-09 14:01:28.711294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:22.207 [2024-10-09 14:01:28.714100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:22.207 [2024-10-09 14:01:28.714166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:22.207 [2024-10-09 14:01:28.714235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:22.207 [2024-10-09 14:01:28.714248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:32:22.207 { 00:32:22.207 "results": [ 00:32:22.207 { 00:32:22.207 "job": "raid_bdev1", 00:32:22.207 "core_mask": "0x1", 00:32:22.207 "workload": "randrw", 00:32:22.207 "percentage": 50, 00:32:22.207 "status": "finished", 00:32:22.207 "queue_depth": 1, 00:32:22.207 "io_size": 131072, 00:32:22.207 "runtime": 1.32835, 00:32:22.207 "iops": 15744.344487522114, 00:32:22.207 "mibps": 1968.0430609402642, 00:32:22.207 "io_failed": 1, 00:32:22.207 "io_timeout": 0, 00:32:22.207 "avg_latency_us": 87.73599663035186, 00:32:22.207 "min_latency_us": 27.30666666666667, 00:32:22.207 "max_latency_us": 1544.777142857143 00:32:22.207 } 00:32:22.207 ], 00:32:22.207 "core_count": 1 00:32:22.207 } 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82255 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82255 ']' 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82255 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.207 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82255 00:32:22.466 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:22.466 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:22.466 killing process with pid 82255 00:32:22.466 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82255' 00:32:22.466 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82255 00:32:22.466 [2024-10-09 14:01:28.758744] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:22.466 14:01:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82255 00:32:22.466 [2024-10-09 14:01:28.795329] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HGd9j0t6Q1 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:32:22.725 00:32:22.725 real 0m3.482s 00:32:22.725 user 0m4.503s 00:32:22.725 sys 0m0.603s 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:32:22.725 14:01:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.725 ************************************ 00:32:22.725 END TEST raid_read_error_test 00:32:22.725 ************************************ 00:32:22.725 14:01:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:32:22.725 14:01:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:22.725 14:01:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:22.725 14:01:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:22.725 ************************************ 00:32:22.725 START TEST raid_write_error_test 00:32:22.725 ************************************ 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:22.725 14:01:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L6JGrxaSpy 00:32:22.726 14:01:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82384 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82384 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82384 ']' 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.726 14:01:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:22.726 [2024-10-09 14:01:29.226413] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:22.726 [2024-10-09 14:01:29.226545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82384 ] 00:32:22.985 [2024-10-09 14:01:29.387546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:22.985 [2024-10-09 14:01:29.434255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.985 [2024-10-09 14:01:29.478356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:22.985 [2024-10-09 14:01:29.478396] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 BaseBdev1_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 true 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 [2024-10-09 14:01:30.227298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:23.920 [2024-10-09 14:01:30.227358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.920 [2024-10-09 14:01:30.227382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:23.920 [2024-10-09 14:01:30.227401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.920 [2024-10-09 14:01:30.230180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.920 [2024-10-09 14:01:30.230229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:23.920 BaseBdev1 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 BaseBdev2_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:23.920 14:01:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 true 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 [2024-10-09 14:01:30.270641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:23.920 [2024-10-09 14:01:30.270694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.920 [2024-10-09 14:01:30.270717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:23.920 [2024-10-09 14:01:30.270730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.920 [2024-10-09 14:01:30.273306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.920 [2024-10-09 14:01:30.273346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:23.920 BaseBdev2 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:23.920 BaseBdev3_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 true 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 [2024-10-09 14:01:30.304172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:23.920 [2024-10-09 14:01:30.304226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.920 [2024-10-09 14:01:30.304251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:23.920 [2024-10-09 14:01:30.304264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.920 [2024-10-09 14:01:30.307144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.920 [2024-10-09 14:01:30.307186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:23.920 BaseBdev3 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 BaseBdev4_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 true 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 [2024-10-09 14:01:30.337776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:23.920 [2024-10-09 14:01:30.337828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:23.920 [2024-10-09 14:01:30.337855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:23.920 [2024-10-09 14:01:30.337868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:23.920 [2024-10-09 14:01:30.340493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:23.920 [2024-10-09 14:01:30.340533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:23.920 BaseBdev4 
00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.920 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.920 [2024-10-09 14:01:30.345818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:23.920 [2024-10-09 14:01:30.348190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:23.920 [2024-10-09 14:01:30.348312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:23.920 [2024-10-09 14:01:30.348365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:23.920 [2024-10-09 14:01:30.348584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:32:23.920 [2024-10-09 14:01:30.348598] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:23.920 [2024-10-09 14:01:30.348886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:23.920 [2024-10-09 14:01:30.349040] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:32:23.921 [2024-10-09 14:01:30.349055] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:32:23.921 [2024-10-09 14:01:30.349206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:23.921 "name": "raid_bdev1", 00:32:23.921 "uuid": "4ffa5138-917f-44eb-aa22-468ed55bc455", 00:32:23.921 "strip_size_kb": 64, 00:32:23.921 "state": "online", 00:32:23.921 "raid_level": "raid0", 00:32:23.921 "superblock": true, 00:32:23.921 "num_base_bdevs": 4, 00:32:23.921 "num_base_bdevs_discovered": 4, 00:32:23.921 
"num_base_bdevs_operational": 4, 00:32:23.921 "base_bdevs_list": [ 00:32:23.921 { 00:32:23.921 "name": "BaseBdev1", 00:32:23.921 "uuid": "d0eb049a-d722-58d0-9a9e-fc9379ffc9d0", 00:32:23.921 "is_configured": true, 00:32:23.921 "data_offset": 2048, 00:32:23.921 "data_size": 63488 00:32:23.921 }, 00:32:23.921 { 00:32:23.921 "name": "BaseBdev2", 00:32:23.921 "uuid": "f4687f2e-0710-5856-b4fc-5b4e29ceef99", 00:32:23.921 "is_configured": true, 00:32:23.921 "data_offset": 2048, 00:32:23.921 "data_size": 63488 00:32:23.921 }, 00:32:23.921 { 00:32:23.921 "name": "BaseBdev3", 00:32:23.921 "uuid": "178385c1-ac0c-52a8-ba96-8054053a3cf7", 00:32:23.921 "is_configured": true, 00:32:23.921 "data_offset": 2048, 00:32:23.921 "data_size": 63488 00:32:23.921 }, 00:32:23.921 { 00:32:23.921 "name": "BaseBdev4", 00:32:23.921 "uuid": "d99b2c4d-5359-53db-9ed8-b53758934e09", 00:32:23.921 "is_configured": true, 00:32:23.921 "data_offset": 2048, 00:32:23.921 "data_size": 63488 00:32:23.921 } 00:32:23.921 ] 00:32:23.921 }' 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:23.921 14:01:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.488 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:24.488 14:01:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:24.488 [2024-10-09 14:01:30.902342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.425 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:25.425 "name": "raid_bdev1", 00:32:25.425 "uuid": "4ffa5138-917f-44eb-aa22-468ed55bc455", 00:32:25.425 "strip_size_kb": 64, 00:32:25.425 "state": "online", 00:32:25.425 "raid_level": "raid0", 00:32:25.425 "superblock": true, 00:32:25.425 "num_base_bdevs": 4, 00:32:25.425 "num_base_bdevs_discovered": 4, 00:32:25.425 "num_base_bdevs_operational": 4, 00:32:25.425 "base_bdevs_list": [ 00:32:25.426 { 00:32:25.426 "name": "BaseBdev1", 00:32:25.426 "uuid": "d0eb049a-d722-58d0-9a9e-fc9379ffc9d0", 00:32:25.426 "is_configured": true, 00:32:25.426 "data_offset": 2048, 00:32:25.426 "data_size": 63488 00:32:25.426 }, 00:32:25.426 { 00:32:25.426 "name": "BaseBdev2", 00:32:25.426 "uuid": "f4687f2e-0710-5856-b4fc-5b4e29ceef99", 00:32:25.426 "is_configured": true, 00:32:25.426 "data_offset": 2048, 00:32:25.426 "data_size": 63488 00:32:25.426 }, 00:32:25.426 { 00:32:25.426 "name": "BaseBdev3", 00:32:25.426 "uuid": "178385c1-ac0c-52a8-ba96-8054053a3cf7", 00:32:25.426 "is_configured": true, 00:32:25.426 "data_offset": 2048, 00:32:25.426 "data_size": 63488 00:32:25.426 }, 00:32:25.426 { 00:32:25.426 "name": "BaseBdev4", 00:32:25.426 "uuid": "d99b2c4d-5359-53db-9ed8-b53758934e09", 00:32:25.426 "is_configured": true, 00:32:25.426 "data_offset": 2048, 00:32:25.426 "data_size": 63488 00:32:25.426 } 00:32:25.426 ] 00:32:25.426 }' 00:32:25.426 14:01:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:25.426 14:01:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.684 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:32:25.685 [2024-10-09 14:01:32.209120] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:25.685 [2024-10-09 14:01:32.209159] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:25.685 [2024-10-09 14:01:32.212077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:25.685 [2024-10-09 14:01:32.212152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:25.685 [2024-10-09 14:01:32.212207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:25.685 [2024-10-09 14:01:32.212228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:32:25.685 { 00:32:25.685 "results": [ 00:32:25.685 { 00:32:25.685 "job": "raid_bdev1", 00:32:25.685 "core_mask": "0x1", 00:32:25.685 "workload": "randrw", 00:32:25.685 "percentage": 50, 00:32:25.685 "status": "finished", 00:32:25.685 "queue_depth": 1, 00:32:25.685 "io_size": 131072, 00:32:25.685 "runtime": 1.304384, 00:32:25.685 "iops": 15542.968941661351, 00:32:25.685 "mibps": 1942.8711177076689, 00:32:25.685 "io_failed": 1, 00:32:25.685 "io_timeout": 0, 00:32:25.685 "avg_latency_us": 89.01400636486406, 00:32:25.685 "min_latency_us": 27.30666666666667, 00:32:25.685 "max_latency_us": 1583.7866666666666 00:32:25.685 } 00:32:25.685 ], 00:32:25.685 "core_count": 1 00:32:25.685 } 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82384 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82384 ']' 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82384 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
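The bdevperf results block above reports `iops`, `mibps`, `io_failed`, and `runtime`; the `fail_per_s=0.77` figure the harness extracts further down follows from `io_failed` divided by `runtime`. A minimal Python sketch of that arithmetic, using the exact numbers printed in the log (the JSON shape is taken from the results block above, not from any SPDK API):

```python
import json

# Results JSON as printed by bdevperf in the log above (trimmed to the
# fields needed for the two derived figures).
results = json.loads("""
{
  "results": [
    {
      "job": "raid_bdev1",
      "io_size": 131072,
      "runtime": 1.304384,
      "iops": 15542.968941661351,
      "io_failed": 1
    }
  ]
}
""")

job = results["results"][0]
# Throughput in MiB/s: IOPS times I/O size, divided by bytes per MiB.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
# Failures per second: failed I/Os over the measured runtime.
fail_per_s = job["io_failed"] / job["runtime"]

print(round(mibps, 2))       # 1942.87, matching the "mibps" field in the log
print(round(fail_per_s, 2))  # 0.77, matching the fail_per_s the harness greps out
```

This is only a reconstruction of the bookkeeping; the test itself derives `fail_per_s` with the `grep`/`awk` pipeline visible a few entries below.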
00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:25.685 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82384 00:32:25.943 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:25.943 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:25.943 killing process with pid 82384 00:32:25.943 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82384' 00:32:25.943 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82384 00:32:25.943 [2024-10-09 14:01:32.261120] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:25.943 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82384 00:32:25.943 [2024-10-09 14:01:32.297989] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L6JGrxaSpy 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:32:26.203 00:32:26.203 real 0m3.433s 00:32:26.203 user 0m4.398s 00:32:26.203 sys 0m0.589s 00:32:26.203 
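Throughout the log, `verify_raid_bdev_state` fetches all raid bdevs via `rpc_cmd bdev_raid_get_bdevs all`, filters with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields against the expected state. A hedged Python sketch of the same check — this mirrors the logic visible in the xtrace, not the actual shell implementation in `bdev_raid.sh`, and the fixture is trimmed from the JSON dumps above:

```python
# Sketch of the verify_raid_bdev_state check: select one raid bdev by name
# (the jq filter in the log) and compare its fields to the expected values.
def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

# Minimal fixture mirroring the bdev_raid_get_bdevs output shown in the log.
bdevs = [{
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs_operational": 4,
}]

info = verify_raid_bdev_state(bdevs, "raid_bdev1", "online", "raid0", 64, 4)
print(info["name"])  # raid_bdev1
```

The same check runs again after the error injection and once more (with `Existed_Raid` and `configuring`/`concat`) in the state-function test that follows.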
************************************ 00:32:26.203 END TEST raid_write_error_test 00:32:26.203 ************************************ 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:26.203 14:01:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.203 14:01:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:26.203 14:01:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:32:26.203 14:01:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:26.203 14:01:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:26.203 14:01:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:26.203 ************************************ 00:32:26.203 START TEST raid_state_function_test 00:32:26.203 ************************************ 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:26.203 14:01:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:26.203 14:01:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82523 00:32:26.203 Process raid pid: 82523 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82523' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82523 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82523 ']' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:26.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:26.203 14:01:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.203 [2024-10-09 14:01:32.748875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:26.203 [2024-10-09 14:01:32.749364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:26.463 [2024-10-09 14:01:32.935169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.463 [2024-10-09 14:01:32.982474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.721 [2024-10-09 14:01:33.027156] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:26.721 [2024-10-09 14:01:33.027382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.289 [2024-10-09 14:01:33.726872] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:27.289 [2024-10-09 14:01:33.727066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:27.289 [2024-10-09 14:01:33.727102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:27.289 [2024-10-09 14:01:33.727118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:27.289 [2024-10-09 14:01:33.727126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:32:27.289 [2024-10-09 14:01:33.727146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:27.289 [2024-10-09 14:01:33.727154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:27.289 [2024-10-09 14:01:33.727168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.289 "name": "Existed_Raid", 00:32:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.289 "strip_size_kb": 64, 00:32:27.289 "state": "configuring", 00:32:27.289 "raid_level": "concat", 00:32:27.289 "superblock": false, 00:32:27.289 "num_base_bdevs": 4, 00:32:27.289 "num_base_bdevs_discovered": 0, 00:32:27.289 "num_base_bdevs_operational": 4, 00:32:27.289 "base_bdevs_list": [ 00:32:27.289 { 00:32:27.289 "name": "BaseBdev1", 00:32:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.289 "is_configured": false, 00:32:27.289 "data_offset": 0, 00:32:27.289 "data_size": 0 00:32:27.289 }, 00:32:27.289 { 00:32:27.289 "name": "BaseBdev2", 00:32:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.289 "is_configured": false, 00:32:27.289 "data_offset": 0, 00:32:27.289 "data_size": 0 00:32:27.289 }, 00:32:27.289 { 00:32:27.289 "name": "BaseBdev3", 00:32:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.289 "is_configured": false, 00:32:27.289 "data_offset": 0, 00:32:27.289 "data_size": 0 00:32:27.289 }, 00:32:27.289 { 00:32:27.289 "name": "BaseBdev4", 00:32:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.289 "is_configured": false, 00:32:27.289 "data_offset": 0, 00:32:27.289 "data_size": 0 00:32:27.289 } 00:32:27.289 ] 00:32:27.289 }' 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.289 14:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 [2024-10-09 14:01:34.190889] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:27.857 [2024-10-09 14:01:34.191091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 [2024-10-09 14:01:34.202947] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:27.857 [2024-10-09 14:01:34.202995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:27.857 [2024-10-09 14:01:34.203006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:27.857 [2024-10-09 14:01:34.203020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:27.857 [2024-10-09 14:01:34.203028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:27.857 [2024-10-09 14:01:34.203042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:27.857 [2024-10-09 14:01:34.203050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:27.857 [2024-10-09 14:01:34.203063] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 [2024-10-09 14:01:34.220865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:27.857 BaseBdev1 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 [ 00:32:27.857 { 00:32:27.857 "name": "BaseBdev1", 00:32:27.857 "aliases": [ 00:32:27.857 "3b96416c-33e3-40db-8fac-906c73a95c22" 00:32:27.857 ], 00:32:27.857 "product_name": "Malloc disk", 00:32:27.857 "block_size": 512, 00:32:27.857 "num_blocks": 65536, 00:32:27.857 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:27.857 "assigned_rate_limits": { 00:32:27.857 "rw_ios_per_sec": 0, 00:32:27.857 "rw_mbytes_per_sec": 0, 00:32:27.857 "r_mbytes_per_sec": 0, 00:32:27.857 "w_mbytes_per_sec": 0 00:32:27.857 }, 00:32:27.857 "claimed": true, 00:32:27.857 "claim_type": "exclusive_write", 00:32:27.857 "zoned": false, 00:32:27.857 "supported_io_types": { 00:32:27.857 "read": true, 00:32:27.857 "write": true, 00:32:27.857 "unmap": true, 00:32:27.857 "flush": true, 00:32:27.857 "reset": true, 00:32:27.857 "nvme_admin": false, 00:32:27.857 "nvme_io": false, 00:32:27.857 "nvme_io_md": false, 00:32:27.857 "write_zeroes": true, 00:32:27.857 "zcopy": true, 00:32:27.857 "get_zone_info": false, 00:32:27.857 "zone_management": false, 00:32:27.857 "zone_append": false, 00:32:27.857 "compare": false, 00:32:27.857 "compare_and_write": false, 00:32:27.857 "abort": true, 00:32:27.857 "seek_hole": false, 00:32:27.857 "seek_data": false, 00:32:27.857 "copy": true, 00:32:27.857 "nvme_iov_md": false 00:32:27.857 }, 00:32:27.857 "memory_domains": [ 00:32:27.857 { 00:32:27.857 "dma_device_id": "system", 00:32:27.857 "dma_device_type": 1 00:32:27.857 }, 00:32:27.857 { 00:32:27.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.857 "dma_device_type": 2 00:32:27.857 } 00:32:27.857 ], 00:32:27.857 "driver_specific": {} 00:32:27.857 } 00:32:27.857 ] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.857 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.857 "name": "Existed_Raid", 
00:32:27.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.857 "strip_size_kb": 64, 00:32:27.857 "state": "configuring", 00:32:27.857 "raid_level": "concat", 00:32:27.857 "superblock": false, 00:32:27.857 "num_base_bdevs": 4, 00:32:27.858 "num_base_bdevs_discovered": 1, 00:32:27.858 "num_base_bdevs_operational": 4, 00:32:27.858 "base_bdevs_list": [ 00:32:27.858 { 00:32:27.858 "name": "BaseBdev1", 00:32:27.858 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:27.858 "is_configured": true, 00:32:27.858 "data_offset": 0, 00:32:27.858 "data_size": 65536 00:32:27.858 }, 00:32:27.858 { 00:32:27.858 "name": "BaseBdev2", 00:32:27.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.858 "is_configured": false, 00:32:27.858 "data_offset": 0, 00:32:27.858 "data_size": 0 00:32:27.858 }, 00:32:27.858 { 00:32:27.858 "name": "BaseBdev3", 00:32:27.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.858 "is_configured": false, 00:32:27.858 "data_offset": 0, 00:32:27.858 "data_size": 0 00:32:27.858 }, 00:32:27.858 { 00:32:27.858 "name": "BaseBdev4", 00:32:27.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.858 "is_configured": false, 00:32:27.858 "data_offset": 0, 00:32:27.858 "data_size": 0 00:32:27.858 } 00:32:27.858 ] 00:32:27.858 }' 00:32:27.858 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.858 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-10-09 14:01:34.721038] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:28.425 [2024-10-09 14:01:34.721092] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 [2024-10-09 14:01:34.729074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:28.425 [2024-10-09 14:01:34.731661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:28.425 [2024-10-09 14:01:34.731707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:28.425 [2024-10-09 14:01:34.731719] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:28.425 [2024-10-09 14:01:34.731732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:28.425 [2024-10-09 14:01:34.731741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:28.425 [2024-10-09 14:01:34.731754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.425 "name": "Existed_Raid", 00:32:28.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.425 "strip_size_kb": 64, 00:32:28.425 "state": "configuring", 00:32:28.425 "raid_level": "concat", 00:32:28.425 "superblock": false, 00:32:28.425 "num_base_bdevs": 4, 00:32:28.425 
"num_base_bdevs_discovered": 1, 00:32:28.425 "num_base_bdevs_operational": 4, 00:32:28.425 "base_bdevs_list": [ 00:32:28.425 { 00:32:28.425 "name": "BaseBdev1", 00:32:28.425 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:28.425 "is_configured": true, 00:32:28.425 "data_offset": 0, 00:32:28.425 "data_size": 65536 00:32:28.425 }, 00:32:28.425 { 00:32:28.425 "name": "BaseBdev2", 00:32:28.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.425 "is_configured": false, 00:32:28.425 "data_offset": 0, 00:32:28.425 "data_size": 0 00:32:28.425 }, 00:32:28.425 { 00:32:28.425 "name": "BaseBdev3", 00:32:28.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.425 "is_configured": false, 00:32:28.425 "data_offset": 0, 00:32:28.425 "data_size": 0 00:32:28.425 }, 00:32:28.425 { 00:32:28.425 "name": "BaseBdev4", 00:32:28.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.425 "is_configured": false, 00:32:28.425 "data_offset": 0, 00:32:28.425 "data_size": 0 00:32:28.425 } 00:32:28.425 ] 00:32:28.425 }' 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.425 14:01:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.684 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:28.684 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.684 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.943 [2024-10-09 14:01:35.237194] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:28.943 BaseBdev2 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:28.943 14:01:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.943 [ 00:32:28.943 { 00:32:28.943 "name": "BaseBdev2", 00:32:28.943 "aliases": [ 00:32:28.943 "babc475d-282a-4d04-8186-f65e18a56d55" 00:32:28.943 ], 00:32:28.943 "product_name": "Malloc disk", 00:32:28.943 "block_size": 512, 00:32:28.943 "num_blocks": 65536, 00:32:28.943 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:28.943 "assigned_rate_limits": { 00:32:28.943 "rw_ios_per_sec": 0, 00:32:28.943 "rw_mbytes_per_sec": 0, 00:32:28.943 "r_mbytes_per_sec": 0, 00:32:28.943 "w_mbytes_per_sec": 0 00:32:28.943 }, 00:32:28.943 "claimed": true, 00:32:28.943 "claim_type": "exclusive_write", 00:32:28.943 "zoned": false, 00:32:28.943 "supported_io_types": { 
00:32:28.943 "read": true, 00:32:28.943 "write": true, 00:32:28.943 "unmap": true, 00:32:28.943 "flush": true, 00:32:28.943 "reset": true, 00:32:28.943 "nvme_admin": false, 00:32:28.943 "nvme_io": false, 00:32:28.943 "nvme_io_md": false, 00:32:28.943 "write_zeroes": true, 00:32:28.943 "zcopy": true, 00:32:28.943 "get_zone_info": false, 00:32:28.943 "zone_management": false, 00:32:28.943 "zone_append": false, 00:32:28.943 "compare": false, 00:32:28.943 "compare_and_write": false, 00:32:28.943 "abort": true, 00:32:28.943 "seek_hole": false, 00:32:28.943 "seek_data": false, 00:32:28.943 "copy": true, 00:32:28.943 "nvme_iov_md": false 00:32:28.943 }, 00:32:28.943 "memory_domains": [ 00:32:28.943 { 00:32:28.943 "dma_device_id": "system", 00:32:28.943 "dma_device_type": 1 00:32:28.943 }, 00:32:28.943 { 00:32:28.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.943 "dma_device_type": 2 00:32:28.943 } 00:32:28.943 ], 00:32:28.943 "driver_specific": {} 00:32:28.943 } 00:32:28.943 ] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:28.943 "name": "Existed_Raid", 00:32:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.943 "strip_size_kb": 64, 00:32:28.943 "state": "configuring", 00:32:28.943 "raid_level": "concat", 00:32:28.943 "superblock": false, 00:32:28.943 "num_base_bdevs": 4, 00:32:28.943 "num_base_bdevs_discovered": 2, 00:32:28.943 "num_base_bdevs_operational": 4, 00:32:28.943 "base_bdevs_list": [ 00:32:28.943 { 00:32:28.943 "name": "BaseBdev1", 00:32:28.943 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:28.943 "is_configured": true, 00:32:28.943 "data_offset": 0, 00:32:28.943 "data_size": 65536 00:32:28.943 }, 00:32:28.943 { 00:32:28.943 "name": "BaseBdev2", 00:32:28.943 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:28.943 
"is_configured": true, 00:32:28.943 "data_offset": 0, 00:32:28.943 "data_size": 65536 00:32:28.943 }, 00:32:28.943 { 00:32:28.943 "name": "BaseBdev3", 00:32:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.943 "is_configured": false, 00:32:28.943 "data_offset": 0, 00:32:28.943 "data_size": 0 00:32:28.943 }, 00:32:28.943 { 00:32:28.943 "name": "BaseBdev4", 00:32:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.943 "is_configured": false, 00:32:28.943 "data_offset": 0, 00:32:28.943 "data_size": 0 00:32:28.943 } 00:32:28.943 ] 00:32:28.943 }' 00:32:28.943 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:28.944 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.511 [2024-10-09 14:01:35.777166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:29.511 BaseBdev3 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.511 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.512 [ 00:32:29.512 { 00:32:29.512 "name": "BaseBdev3", 00:32:29.512 "aliases": [ 00:32:29.512 "53c74088-199c-482a-af78-66ecab766d59" 00:32:29.512 ], 00:32:29.512 "product_name": "Malloc disk", 00:32:29.512 "block_size": 512, 00:32:29.512 "num_blocks": 65536, 00:32:29.512 "uuid": "53c74088-199c-482a-af78-66ecab766d59", 00:32:29.512 "assigned_rate_limits": { 00:32:29.512 "rw_ios_per_sec": 0, 00:32:29.512 "rw_mbytes_per_sec": 0, 00:32:29.512 "r_mbytes_per_sec": 0, 00:32:29.512 "w_mbytes_per_sec": 0 00:32:29.512 }, 00:32:29.512 "claimed": true, 00:32:29.512 "claim_type": "exclusive_write", 00:32:29.512 "zoned": false, 00:32:29.512 "supported_io_types": { 00:32:29.512 "read": true, 00:32:29.512 "write": true, 00:32:29.512 "unmap": true, 00:32:29.512 "flush": true, 00:32:29.512 "reset": true, 00:32:29.512 "nvme_admin": false, 00:32:29.512 "nvme_io": false, 00:32:29.512 "nvme_io_md": false, 00:32:29.512 "write_zeroes": true, 00:32:29.512 "zcopy": true, 00:32:29.512 "get_zone_info": false, 00:32:29.512 "zone_management": false, 00:32:29.512 "zone_append": false, 00:32:29.512 "compare": false, 00:32:29.512 "compare_and_write": false, 
00:32:29.512 "abort": true, 00:32:29.512 "seek_hole": false, 00:32:29.512 "seek_data": false, 00:32:29.512 "copy": true, 00:32:29.512 "nvme_iov_md": false 00:32:29.512 }, 00:32:29.512 "memory_domains": [ 00:32:29.512 { 00:32:29.512 "dma_device_id": "system", 00:32:29.512 "dma_device_type": 1 00:32:29.512 }, 00:32:29.512 { 00:32:29.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.512 "dma_device_type": 2 00:32:29.512 } 00:32:29.512 ], 00:32:29.512 "driver_specific": {} 00:32:29.512 } 00:32:29.512 ] 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:29.512 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:29.513 "name": "Existed_Raid", 00:32:29.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.513 "strip_size_kb": 64, 00:32:29.513 "state": "configuring", 00:32:29.513 "raid_level": "concat", 00:32:29.513 "superblock": false, 00:32:29.513 "num_base_bdevs": 4, 00:32:29.513 "num_base_bdevs_discovered": 3, 00:32:29.513 "num_base_bdevs_operational": 4, 00:32:29.513 "base_bdevs_list": [ 00:32:29.513 { 00:32:29.513 "name": "BaseBdev1", 00:32:29.513 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:29.513 "is_configured": true, 00:32:29.513 "data_offset": 0, 00:32:29.513 "data_size": 65536 00:32:29.513 }, 00:32:29.513 { 00:32:29.513 "name": "BaseBdev2", 00:32:29.513 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:29.513 "is_configured": true, 00:32:29.513 "data_offset": 0, 00:32:29.513 "data_size": 65536 00:32:29.513 }, 00:32:29.513 { 00:32:29.513 "name": "BaseBdev3", 00:32:29.513 "uuid": "53c74088-199c-482a-af78-66ecab766d59", 00:32:29.513 "is_configured": true, 00:32:29.513 "data_offset": 0, 00:32:29.513 "data_size": 65536 00:32:29.513 }, 00:32:29.513 { 00:32:29.513 "name": "BaseBdev4", 00:32:29.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.513 "is_configured": false, 
00:32:29.513 "data_offset": 0, 00:32:29.513 "data_size": 0 00:32:29.513 } 00:32:29.513 ] 00:32:29.513 }' 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:29.513 14:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.772 [2024-10-09 14:01:36.284678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:29.772 [2024-10-09 14:01:36.284733] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:32:29.772 [2024-10-09 14:01:36.284746] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:32:29.772 [2024-10-09 14:01:36.285094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:29.772 [2024-10-09 14:01:36.285223] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:32:29.772 [2024-10-09 14:01:36.285238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:32:29.772 [2024-10-09 14:01:36.285470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.772 BaseBdev4 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.772 [ 00:32:29.772 { 00:32:29.772 "name": "BaseBdev4", 00:32:29.772 "aliases": [ 00:32:29.772 "238217f6-e69f-4d9f-9547-932c256e50c9" 00:32:29.772 ], 00:32:29.772 "product_name": "Malloc disk", 00:32:29.772 "block_size": 512, 00:32:29.772 "num_blocks": 65536, 00:32:29.772 "uuid": "238217f6-e69f-4d9f-9547-932c256e50c9", 00:32:29.772 "assigned_rate_limits": { 00:32:29.772 "rw_ios_per_sec": 0, 00:32:29.772 "rw_mbytes_per_sec": 0, 00:32:29.772 "r_mbytes_per_sec": 0, 00:32:29.772 "w_mbytes_per_sec": 0 00:32:29.772 }, 00:32:29.772 "claimed": true, 00:32:29.772 "claim_type": "exclusive_write", 00:32:29.772 "zoned": false, 00:32:29.772 "supported_io_types": { 00:32:29.772 "read": true, 00:32:29.772 "write": true, 00:32:29.772 "unmap": true, 00:32:29.772 "flush": true, 00:32:29.772 "reset": true, 00:32:29.772 
"nvme_admin": false, 00:32:29.772 "nvme_io": false, 00:32:29.772 "nvme_io_md": false, 00:32:29.772 "write_zeroes": true, 00:32:29.772 "zcopy": true, 00:32:29.772 "get_zone_info": false, 00:32:29.772 "zone_management": false, 00:32:29.772 "zone_append": false, 00:32:29.772 "compare": false, 00:32:29.772 "compare_and_write": false, 00:32:29.772 "abort": true, 00:32:29.772 "seek_hole": false, 00:32:29.772 "seek_data": false, 00:32:29.772 "copy": true, 00:32:29.772 "nvme_iov_md": false 00:32:29.772 }, 00:32:29.772 "memory_domains": [ 00:32:29.772 { 00:32:29.772 "dma_device_id": "system", 00:32:29.772 "dma_device_type": 1 00:32:29.772 }, 00:32:29.772 { 00:32:29.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:29.772 "dma_device_type": 2 00:32:29.772 } 00:32:29.772 ], 00:32:29.772 "driver_specific": {} 00:32:29.772 } 00:32:29.772 ] 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:29.772 
14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:29.772 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.031 "name": "Existed_Raid", 00:32:30.031 "uuid": "0e7124e8-5146-46bd-973d-113cae9916bc", 00:32:30.031 "strip_size_kb": 64, 00:32:30.031 "state": "online", 00:32:30.031 "raid_level": "concat", 00:32:30.031 "superblock": false, 00:32:30.031 "num_base_bdevs": 4, 00:32:30.031 "num_base_bdevs_discovered": 4, 00:32:30.031 "num_base_bdevs_operational": 4, 00:32:30.031 "base_bdevs_list": [ 00:32:30.031 { 00:32:30.031 "name": "BaseBdev1", 00:32:30.031 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:30.031 "is_configured": true, 00:32:30.031 "data_offset": 0, 00:32:30.031 "data_size": 65536 00:32:30.031 }, 00:32:30.031 { 00:32:30.031 "name": "BaseBdev2", 00:32:30.031 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:30.031 "is_configured": true, 00:32:30.031 "data_offset": 0, 00:32:30.031 "data_size": 65536 00:32:30.031 }, 00:32:30.031 { 00:32:30.031 "name": "BaseBdev3", 
00:32:30.031 "uuid": "53c74088-199c-482a-af78-66ecab766d59", 00:32:30.031 "is_configured": true, 00:32:30.031 "data_offset": 0, 00:32:30.031 "data_size": 65536 00:32:30.031 }, 00:32:30.031 { 00:32:30.031 "name": "BaseBdev4", 00:32:30.031 "uuid": "238217f6-e69f-4d9f-9547-932c256e50c9", 00:32:30.031 "is_configured": true, 00:32:30.031 "data_offset": 0, 00:32:30.031 "data_size": 65536 00:32:30.031 } 00:32:30.031 ] 00:32:30.031 }' 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.031 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:30.290 [2024-10-09 14:01:36.777257] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.290 
14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.290 "name": "Existed_Raid", 00:32:30.290 "aliases": [ 00:32:30.290 "0e7124e8-5146-46bd-973d-113cae9916bc" 00:32:30.290 ], 00:32:30.290 "product_name": "Raid Volume", 00:32:30.290 "block_size": 512, 00:32:30.290 "num_blocks": 262144, 00:32:30.290 "uuid": "0e7124e8-5146-46bd-973d-113cae9916bc", 00:32:30.290 "assigned_rate_limits": { 00:32:30.290 "rw_ios_per_sec": 0, 00:32:30.290 "rw_mbytes_per_sec": 0, 00:32:30.290 "r_mbytes_per_sec": 0, 00:32:30.290 "w_mbytes_per_sec": 0 00:32:30.290 }, 00:32:30.290 "claimed": false, 00:32:30.290 "zoned": false, 00:32:30.290 "supported_io_types": { 00:32:30.290 "read": true, 00:32:30.290 "write": true, 00:32:30.290 "unmap": true, 00:32:30.290 "flush": true, 00:32:30.290 "reset": true, 00:32:30.290 "nvme_admin": false, 00:32:30.290 "nvme_io": false, 00:32:30.290 "nvme_io_md": false, 00:32:30.290 "write_zeroes": true, 00:32:30.290 "zcopy": false, 00:32:30.290 "get_zone_info": false, 00:32:30.290 "zone_management": false, 00:32:30.290 "zone_append": false, 00:32:30.290 "compare": false, 00:32:30.290 "compare_and_write": false, 00:32:30.290 "abort": false, 00:32:30.290 "seek_hole": false, 00:32:30.290 "seek_data": false, 00:32:30.290 "copy": false, 00:32:30.290 "nvme_iov_md": false 00:32:30.290 }, 00:32:30.290 "memory_domains": [ 00:32:30.290 { 00:32:30.290 "dma_device_id": "system", 00:32:30.290 "dma_device_type": 1 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.290 "dma_device_type": 2 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "system", 00:32:30.290 "dma_device_type": 1 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.290 "dma_device_type": 2 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "system", 00:32:30.290 "dma_device_type": 1 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:30.290 "dma_device_type": 2 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "system", 00:32:30.290 "dma_device_type": 1 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.290 "dma_device_type": 2 00:32:30.290 } 00:32:30.290 ], 00:32:30.290 "driver_specific": { 00:32:30.290 "raid": { 00:32:30.290 "uuid": "0e7124e8-5146-46bd-973d-113cae9916bc", 00:32:30.290 "strip_size_kb": 64, 00:32:30.290 "state": "online", 00:32:30.290 "raid_level": "concat", 00:32:30.290 "superblock": false, 00:32:30.290 "num_base_bdevs": 4, 00:32:30.290 "num_base_bdevs_discovered": 4, 00:32:30.290 "num_base_bdevs_operational": 4, 00:32:30.290 "base_bdevs_list": [ 00:32:30.290 { 00:32:30.290 "name": "BaseBdev1", 00:32:30.290 "uuid": "3b96416c-33e3-40db-8fac-906c73a95c22", 00:32:30.290 "is_configured": true, 00:32:30.290 "data_offset": 0, 00:32:30.290 "data_size": 65536 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "name": "BaseBdev2", 00:32:30.290 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:30.290 "is_configured": true, 00:32:30.290 "data_offset": 0, 00:32:30.290 "data_size": 65536 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "name": "BaseBdev3", 00:32:30.290 "uuid": "53c74088-199c-482a-af78-66ecab766d59", 00:32:30.290 "is_configured": true, 00:32:30.290 "data_offset": 0, 00:32:30.290 "data_size": 65536 00:32:30.290 }, 00:32:30.290 { 00:32:30.290 "name": "BaseBdev4", 00:32:30.290 "uuid": "238217f6-e69f-4d9f-9547-932c256e50c9", 00:32:30.290 "is_configured": true, 00:32:30.290 "data_offset": 0, 00:32:30.290 "data_size": 65536 00:32:30.290 } 00:32:30.290 ] 00:32:30.290 } 00:32:30.290 } 00:32:30.290 }' 00:32:30.290 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:30.549 BaseBdev2 
00:32:30.549 BaseBdev3 00:32:30.549 BaseBdev4' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.549 14:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.549 14:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.549 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:30.550 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:30.550 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:30.550 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.550 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.550 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:30.809 14:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.809 [2024-10-09 14:01:37.121059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:30.809 [2024-10-09 14:01:37.121225] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:30.809 [2024-10-09 14:01:37.121314] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:30.809 "name": "Existed_Raid", 00:32:30.809 "uuid": "0e7124e8-5146-46bd-973d-113cae9916bc", 00:32:30.809 "strip_size_kb": 64, 00:32:30.809 "state": "offline", 00:32:30.809 "raid_level": "concat", 00:32:30.809 "superblock": false, 00:32:30.809 "num_base_bdevs": 4, 00:32:30.809 "num_base_bdevs_discovered": 3, 00:32:30.809 "num_base_bdevs_operational": 3, 00:32:30.809 "base_bdevs_list": [ 00:32:30.809 { 00:32:30.809 "name": null, 00:32:30.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.809 "is_configured": false, 00:32:30.809 "data_offset": 0, 00:32:30.809 "data_size": 65536 00:32:30.809 }, 00:32:30.809 { 00:32:30.809 "name": "BaseBdev2", 00:32:30.809 "uuid": "babc475d-282a-4d04-8186-f65e18a56d55", 00:32:30.809 "is_configured": 
true, 00:32:30.809 "data_offset": 0, 00:32:30.809 "data_size": 65536 00:32:30.809 }, 00:32:30.809 { 00:32:30.809 "name": "BaseBdev3", 00:32:30.809 "uuid": "53c74088-199c-482a-af78-66ecab766d59", 00:32:30.809 "is_configured": true, 00:32:30.809 "data_offset": 0, 00:32:30.809 "data_size": 65536 00:32:30.809 }, 00:32:30.809 { 00:32:30.809 "name": "BaseBdev4", 00:32:30.809 "uuid": "238217f6-e69f-4d9f-9547-932c256e50c9", 00:32:30.809 "is_configured": true, 00:32:30.809 "data_offset": 0, 00:32:30.809 "data_size": 65536 00:32:30.809 } 00:32:30.809 ] 00:32:30.809 }' 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:30.809 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.068 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.327 [2024-10-09 14:01:37.630368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.327 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.327 [2024-10-09 14:01:37.694399] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:31.328 14:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 [2024-10-09 14:01:37.762334] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:31.328 [2024-10-09 14:01:37.762377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 BaseBdev2 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.328 [ 00:32:31.328 { 00:32:31.328 "name": "BaseBdev2", 00:32:31.328 "aliases": [ 00:32:31.328 "436ea05f-10da-4d73-bea5-9c2fd2091b40" 00:32:31.328 ], 00:32:31.328 "product_name": "Malloc disk", 00:32:31.328 "block_size": 512, 00:32:31.328 "num_blocks": 65536, 00:32:31.328 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:31.328 "assigned_rate_limits": { 00:32:31.328 "rw_ios_per_sec": 0, 00:32:31.328 "rw_mbytes_per_sec": 0, 00:32:31.328 "r_mbytes_per_sec": 0, 00:32:31.328 "w_mbytes_per_sec": 0 00:32:31.328 }, 00:32:31.328 "claimed": false, 00:32:31.328 "zoned": false, 00:32:31.328 "supported_io_types": { 00:32:31.328 "read": true, 00:32:31.328 "write": true, 00:32:31.328 "unmap": true, 00:32:31.328 "flush": true, 00:32:31.328 "reset": true, 00:32:31.328 "nvme_admin": false, 00:32:31.328 "nvme_io": false, 00:32:31.328 "nvme_io_md": false, 00:32:31.328 "write_zeroes": true, 00:32:31.328 "zcopy": true, 00:32:31.328 "get_zone_info": false, 00:32:31.328 "zone_management": false, 00:32:31.328 "zone_append": false, 00:32:31.328 "compare": false, 00:32:31.328 "compare_and_write": false, 00:32:31.328 "abort": true, 00:32:31.328 "seek_hole": false, 00:32:31.328 
"seek_data": false, 00:32:31.328 "copy": true, 00:32:31.328 "nvme_iov_md": false 00:32:31.328 }, 00:32:31.328 "memory_domains": [ 00:32:31.328 { 00:32:31.328 "dma_device_id": "system", 00:32:31.328 "dma_device_type": 1 00:32:31.328 }, 00:32:31.328 { 00:32:31.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.328 "dma_device_type": 2 00:32:31.328 } 00:32:31.328 ], 00:32:31.328 "driver_specific": {} 00:32:31.328 } 00:32:31.328 ] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.328 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 BaseBdev3 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 [ 00:32:31.587 { 00:32:31.587 "name": "BaseBdev3", 00:32:31.587 "aliases": [ 00:32:31.587 "cd439b91-cbad-4747-adf6-1cd17de7d524" 00:32:31.587 ], 00:32:31.587 "product_name": "Malloc disk", 00:32:31.587 "block_size": 512, 00:32:31.587 "num_blocks": 65536, 00:32:31.587 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:31.587 "assigned_rate_limits": { 00:32:31.587 "rw_ios_per_sec": 0, 00:32:31.587 "rw_mbytes_per_sec": 0, 00:32:31.587 "r_mbytes_per_sec": 0, 00:32:31.587 "w_mbytes_per_sec": 0 00:32:31.587 }, 00:32:31.587 "claimed": false, 00:32:31.587 "zoned": false, 00:32:31.587 "supported_io_types": { 00:32:31.587 "read": true, 00:32:31.587 "write": true, 00:32:31.587 "unmap": true, 00:32:31.587 "flush": true, 00:32:31.587 "reset": true, 00:32:31.587 "nvme_admin": false, 00:32:31.587 "nvme_io": false, 00:32:31.587 "nvme_io_md": false, 00:32:31.587 "write_zeroes": true, 00:32:31.587 "zcopy": true, 00:32:31.587 "get_zone_info": false, 00:32:31.587 "zone_management": false, 00:32:31.587 "zone_append": false, 00:32:31.587 "compare": false, 00:32:31.587 "compare_and_write": false, 00:32:31.587 "abort": true, 00:32:31.587 "seek_hole": false, 00:32:31.587 "seek_data": false, 
00:32:31.587 "copy": true, 00:32:31.587 "nvme_iov_md": false 00:32:31.587 }, 00:32:31.587 "memory_domains": [ 00:32:31.587 { 00:32:31.587 "dma_device_id": "system", 00:32:31.587 "dma_device_type": 1 00:32:31.587 }, 00:32:31.587 { 00:32:31.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.587 "dma_device_type": 2 00:32:31.587 } 00:32:31.587 ], 00:32:31.587 "driver_specific": {} 00:32:31.587 } 00:32:31.587 ] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 BaseBdev4 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:31.587 
14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.587 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.587 [ 00:32:31.587 { 00:32:31.587 "name": "BaseBdev4", 00:32:31.587 "aliases": [ 00:32:31.587 "17862802-13e5-4713-ad81-43acbb2e0ae2" 00:32:31.587 ], 00:32:31.587 "product_name": "Malloc disk", 00:32:31.587 "block_size": 512, 00:32:31.587 "num_blocks": 65536, 00:32:31.587 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:31.587 "assigned_rate_limits": { 00:32:31.587 "rw_ios_per_sec": 0, 00:32:31.587 "rw_mbytes_per_sec": 0, 00:32:31.587 "r_mbytes_per_sec": 0, 00:32:31.587 "w_mbytes_per_sec": 0 00:32:31.587 }, 00:32:31.587 "claimed": false, 00:32:31.587 "zoned": false, 00:32:31.587 "supported_io_types": { 00:32:31.587 "read": true, 00:32:31.587 "write": true, 00:32:31.587 "unmap": true, 00:32:31.587 "flush": true, 00:32:31.587 "reset": true, 00:32:31.587 "nvme_admin": false, 00:32:31.587 "nvme_io": false, 00:32:31.587 "nvme_io_md": false, 00:32:31.587 "write_zeroes": true, 00:32:31.587 "zcopy": true, 00:32:31.587 "get_zone_info": false, 00:32:31.587 "zone_management": false, 00:32:31.587 "zone_append": false, 00:32:31.587 "compare": false, 00:32:31.588 "compare_and_write": false, 00:32:31.588 "abort": true, 00:32:31.588 "seek_hole": false, 00:32:31.588 "seek_data": false, 00:32:31.588 
"copy": true, 00:32:31.588 "nvme_iov_md": false 00:32:31.588 }, 00:32:31.588 "memory_domains": [ 00:32:31.588 { 00:32:31.588 "dma_device_id": "system", 00:32:31.588 "dma_device_type": 1 00:32:31.588 }, 00:32:31.588 { 00:32:31.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.588 "dma_device_type": 2 00:32:31.588 } 00:32:31.588 ], 00:32:31.588 "driver_specific": {} 00:32:31.588 } 00:32:31.588 ] 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.588 [2024-10-09 14:01:37.985765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:31.588 [2024-10-09 14:01:37.985911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:31.588 [2024-10-09 14:01:37.986028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:31.588 [2024-10-09 14:01:37.988223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:31.588 [2024-10-09 14:01:37.988377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.588 14:01:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.588 14:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:31.588 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.588 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.588 "name": "Existed_Raid", 00:32:31.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.588 "strip_size_kb": 64, 00:32:31.588 "state": "configuring", 00:32:31.588 
"raid_level": "concat", 00:32:31.588 "superblock": false, 00:32:31.588 "num_base_bdevs": 4, 00:32:31.588 "num_base_bdevs_discovered": 3, 00:32:31.588 "num_base_bdevs_operational": 4, 00:32:31.588 "base_bdevs_list": [ 00:32:31.588 { 00:32:31.588 "name": "BaseBdev1", 00:32:31.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.588 "is_configured": false, 00:32:31.588 "data_offset": 0, 00:32:31.588 "data_size": 0 00:32:31.588 }, 00:32:31.588 { 00:32:31.588 "name": "BaseBdev2", 00:32:31.588 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:31.588 "is_configured": true, 00:32:31.588 "data_offset": 0, 00:32:31.588 "data_size": 65536 00:32:31.588 }, 00:32:31.588 { 00:32:31.588 "name": "BaseBdev3", 00:32:31.588 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:31.588 "is_configured": true, 00:32:31.588 "data_offset": 0, 00:32:31.588 "data_size": 65536 00:32:31.588 }, 00:32:31.588 { 00:32:31.588 "name": "BaseBdev4", 00:32:31.588 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:31.588 "is_configured": true, 00:32:31.588 "data_offset": 0, 00:32:31.588 "data_size": 65536 00:32:31.588 } 00:32:31.588 ] 00:32:31.588 }' 00:32:31.588 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.588 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.155 [2024-10-09 14:01:38.413873] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.155 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.156 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.156 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.156 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.156 "name": "Existed_Raid", 00:32:32.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.156 "strip_size_kb": 64, 00:32:32.156 "state": "configuring", 00:32:32.156 "raid_level": "concat", 00:32:32.156 "superblock": false, 
00:32:32.156 "num_base_bdevs": 4, 00:32:32.156 "num_base_bdevs_discovered": 2, 00:32:32.156 "num_base_bdevs_operational": 4, 00:32:32.156 "base_bdevs_list": [ 00:32:32.156 { 00:32:32.156 "name": "BaseBdev1", 00:32:32.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.156 "is_configured": false, 00:32:32.156 "data_offset": 0, 00:32:32.156 "data_size": 0 00:32:32.156 }, 00:32:32.156 { 00:32:32.156 "name": null, 00:32:32.156 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:32.156 "is_configured": false, 00:32:32.156 "data_offset": 0, 00:32:32.156 "data_size": 65536 00:32:32.156 }, 00:32:32.156 { 00:32:32.156 "name": "BaseBdev3", 00:32:32.156 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:32.156 "is_configured": true, 00:32:32.156 "data_offset": 0, 00:32:32.156 "data_size": 65536 00:32:32.156 }, 00:32:32.156 { 00:32:32.156 "name": "BaseBdev4", 00:32:32.156 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:32.156 "is_configured": true, 00:32:32.156 "data_offset": 0, 00:32:32.156 "data_size": 65536 00:32:32.156 } 00:32:32.156 ] 00:32:32.156 }' 00:32:32.156 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.156 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:32.414 14:01:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.414 [2024-10-09 14:01:38.889103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:32.414 BaseBdev1 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.414 [ 00:32:32.414 { 00:32:32.414 "name": "BaseBdev1", 00:32:32.414 "aliases": [ 00:32:32.414 "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542" 00:32:32.414 ], 00:32:32.414 "product_name": "Malloc disk", 00:32:32.414 "block_size": 512, 00:32:32.414 "num_blocks": 65536, 00:32:32.414 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:32.414 "assigned_rate_limits": { 00:32:32.414 "rw_ios_per_sec": 0, 00:32:32.414 "rw_mbytes_per_sec": 0, 00:32:32.414 "r_mbytes_per_sec": 0, 00:32:32.414 "w_mbytes_per_sec": 0 00:32:32.414 }, 00:32:32.414 "claimed": true, 00:32:32.414 "claim_type": "exclusive_write", 00:32:32.414 "zoned": false, 00:32:32.414 "supported_io_types": { 00:32:32.414 "read": true, 00:32:32.414 "write": true, 00:32:32.414 "unmap": true, 00:32:32.414 "flush": true, 00:32:32.414 "reset": true, 00:32:32.414 "nvme_admin": false, 00:32:32.414 "nvme_io": false, 00:32:32.414 "nvme_io_md": false, 00:32:32.414 "write_zeroes": true, 00:32:32.414 "zcopy": true, 00:32:32.414 "get_zone_info": false, 00:32:32.414 "zone_management": false, 00:32:32.414 "zone_append": false, 00:32:32.414 "compare": false, 00:32:32.414 "compare_and_write": false, 00:32:32.414 "abort": true, 00:32:32.414 "seek_hole": false, 00:32:32.414 "seek_data": false, 00:32:32.414 "copy": true, 00:32:32.414 "nvme_iov_md": false 00:32:32.414 }, 00:32:32.414 "memory_domains": [ 00:32:32.414 { 00:32:32.414 "dma_device_id": "system", 00:32:32.414 "dma_device_type": 1 00:32:32.414 }, 00:32:32.414 { 00:32:32.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.414 "dma_device_type": 2 00:32:32.414 } 00:32:32.414 ], 00:32:32.414 "driver_specific": {} 00:32:32.414 } 00:32:32.414 ] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.414 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.415 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.415 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.415 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.415 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.415 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.673 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.673 "name": "Existed_Raid", 00:32:32.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.673 "strip_size_kb": 64, 00:32:32.673 "state": "configuring", 00:32:32.673 "raid_level": "concat", 00:32:32.673 "superblock": false, 
00:32:32.673 "num_base_bdevs": 4, 00:32:32.673 "num_base_bdevs_discovered": 3, 00:32:32.673 "num_base_bdevs_operational": 4, 00:32:32.673 "base_bdevs_list": [ 00:32:32.673 { 00:32:32.673 "name": "BaseBdev1", 00:32:32.673 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:32.673 "is_configured": true, 00:32:32.673 "data_offset": 0, 00:32:32.673 "data_size": 65536 00:32:32.673 }, 00:32:32.673 { 00:32:32.673 "name": null, 00:32:32.673 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:32.673 "is_configured": false, 00:32:32.673 "data_offset": 0, 00:32:32.673 "data_size": 65536 00:32:32.673 }, 00:32:32.673 { 00:32:32.673 "name": "BaseBdev3", 00:32:32.673 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:32.673 "is_configured": true, 00:32:32.673 "data_offset": 0, 00:32:32.673 "data_size": 65536 00:32:32.673 }, 00:32:32.673 { 00:32:32.673 "name": "BaseBdev4", 00:32:32.673 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:32.673 "is_configured": true, 00:32:32.673 "data_offset": 0, 00:32:32.673 "data_size": 65536 00:32:32.673 } 00:32:32.673 ] 00:32:32.673 }' 00:32:32.673 14:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.673 14:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:32.932 14:01:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:32.932 [2024-10-09 14:01:39.417253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:32.932 "name": "Existed_Raid", 00:32:32.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.932 "strip_size_kb": 64, 00:32:32.932 "state": "configuring", 00:32:32.932 "raid_level": "concat", 00:32:32.932 "superblock": false, 00:32:32.932 "num_base_bdevs": 4, 00:32:32.932 "num_base_bdevs_discovered": 2, 00:32:32.932 "num_base_bdevs_operational": 4, 00:32:32.932 "base_bdevs_list": [ 00:32:32.932 { 00:32:32.932 "name": "BaseBdev1", 00:32:32.932 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:32.932 "is_configured": true, 00:32:32.932 "data_offset": 0, 00:32:32.932 "data_size": 65536 00:32:32.932 }, 00:32:32.932 { 00:32:32.932 "name": null, 00:32:32.932 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:32.932 "is_configured": false, 00:32:32.932 "data_offset": 0, 00:32:32.932 "data_size": 65536 00:32:32.932 }, 00:32:32.932 { 00:32:32.932 "name": null, 00:32:32.932 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:32.932 "is_configured": false, 00:32:32.932 "data_offset": 0, 00:32:32.932 "data_size": 65536 00:32:32.932 }, 00:32:32.932 { 00:32:32.932 "name": "BaseBdev4", 00:32:32.932 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:32.932 "is_configured": true, 00:32:32.932 "data_offset": 0, 00:32:32.932 "data_size": 65536 00:32:32.932 } 00:32:32.932 ] 00:32:32.932 }' 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:32.932 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.499 [2024-10-09 14:01:39.905409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:33.499 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:33.500 "name": "Existed_Raid", 00:32:33.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.500 "strip_size_kb": 64, 00:32:33.500 "state": "configuring", 00:32:33.500 "raid_level": "concat", 00:32:33.500 "superblock": false, 00:32:33.500 "num_base_bdevs": 4, 00:32:33.500 "num_base_bdevs_discovered": 3, 00:32:33.500 "num_base_bdevs_operational": 4, 00:32:33.500 "base_bdevs_list": [ 00:32:33.500 { 00:32:33.500 "name": "BaseBdev1", 00:32:33.500 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:33.500 "is_configured": true, 00:32:33.500 "data_offset": 0, 00:32:33.500 "data_size": 65536 00:32:33.500 }, 00:32:33.500 { 00:32:33.500 "name": null, 00:32:33.500 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:33.500 "is_configured": false, 00:32:33.500 "data_offset": 0, 00:32:33.500 "data_size": 65536 00:32:33.500 }, 00:32:33.500 { 00:32:33.500 "name": "BaseBdev3", 00:32:33.500 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:33.500 
"is_configured": true, 00:32:33.500 "data_offset": 0, 00:32:33.500 "data_size": 65536 00:32:33.500 }, 00:32:33.500 { 00:32:33.500 "name": "BaseBdev4", 00:32:33.500 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:33.500 "is_configured": true, 00:32:33.500 "data_offset": 0, 00:32:33.500 "data_size": 65536 00:32:33.500 } 00:32:33.500 ] 00:32:33.500 }' 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:33.500 14:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 [2024-10-09 14:01:40.469508] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.065 "name": "Existed_Raid", 00:32:34.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.065 "strip_size_kb": 64, 00:32:34.065 "state": "configuring", 00:32:34.065 "raid_level": "concat", 00:32:34.065 "superblock": false, 00:32:34.065 "num_base_bdevs": 4, 00:32:34.065 "num_base_bdevs_discovered": 2, 00:32:34.065 "num_base_bdevs_operational": 4, 
00:32:34.065 "base_bdevs_list": [ 00:32:34.065 { 00:32:34.065 "name": null, 00:32:34.065 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:34.065 "is_configured": false, 00:32:34.065 "data_offset": 0, 00:32:34.065 "data_size": 65536 00:32:34.065 }, 00:32:34.065 { 00:32:34.065 "name": null, 00:32:34.065 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:34.065 "is_configured": false, 00:32:34.065 "data_offset": 0, 00:32:34.065 "data_size": 65536 00:32:34.065 }, 00:32:34.065 { 00:32:34.065 "name": "BaseBdev3", 00:32:34.065 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:34.065 "is_configured": true, 00:32:34.065 "data_offset": 0, 00:32:34.065 "data_size": 65536 00:32:34.065 }, 00:32:34.065 { 00:32:34.065 "name": "BaseBdev4", 00:32:34.065 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:34.065 "is_configured": true, 00:32:34.065 "data_offset": 0, 00:32:34.065 "data_size": 65536 00:32:34.065 } 00:32:34.065 ] 00:32:34.065 }' 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.065 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:34.628 14:01:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.628 [2024-10-09 14:01:40.972252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.628 14:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:32:34.628 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.628 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:34.628 "name": "Existed_Raid", 00:32:34.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.628 "strip_size_kb": 64, 00:32:34.628 "state": "configuring", 00:32:34.628 "raid_level": "concat", 00:32:34.628 "superblock": false, 00:32:34.628 "num_base_bdevs": 4, 00:32:34.628 "num_base_bdevs_discovered": 3, 00:32:34.628 "num_base_bdevs_operational": 4, 00:32:34.628 "base_bdevs_list": [ 00:32:34.628 { 00:32:34.628 "name": null, 00:32:34.628 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:34.628 "is_configured": false, 00:32:34.628 "data_offset": 0, 00:32:34.628 "data_size": 65536 00:32:34.628 }, 00:32:34.628 { 00:32:34.628 "name": "BaseBdev2", 00:32:34.628 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:34.628 "is_configured": true, 00:32:34.628 "data_offset": 0, 00:32:34.628 "data_size": 65536 00:32:34.628 }, 00:32:34.628 { 00:32:34.628 "name": "BaseBdev3", 00:32:34.628 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:34.628 "is_configured": true, 00:32:34.628 "data_offset": 0, 00:32:34.628 "data_size": 65536 00:32:34.628 }, 00:32:34.628 { 00:32:34.628 "name": "BaseBdev4", 00:32:34.628 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:34.628 "is_configured": true, 00:32:34.628 "data_offset": 0, 00:32:34.628 "data_size": 65536 00:32:34.628 } 00:32:34.628 ] 00:32:34.628 }' 00:32:34.629 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:34.629 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.886 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:34.886 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:32:34.886 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.886 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:34.886 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ec30592f-7ee4-4bcd-8ba3-5d0da5e09542 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.144 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.144 [2024-10-09 14:01:41.507455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:35.145 [2024-10-09 14:01:41.507502] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:32:35.145 [2024-10-09 14:01:41.507511] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:32:35.145 [2024-10-09 14:01:41.507806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:35.145 [2024-10-09 14:01:41.507917] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:32:35.145 [2024-10-09 14:01:41.507938] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:32:35.145 [2024-10-09 14:01:41.508111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.145 NewBaseBdev 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.145 [ 00:32:35.145 { 
00:32:35.145 "name": "NewBaseBdev", 00:32:35.145 "aliases": [ 00:32:35.145 "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542" 00:32:35.145 ], 00:32:35.145 "product_name": "Malloc disk", 00:32:35.145 "block_size": 512, 00:32:35.145 "num_blocks": 65536, 00:32:35.145 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:35.145 "assigned_rate_limits": { 00:32:35.145 "rw_ios_per_sec": 0, 00:32:35.145 "rw_mbytes_per_sec": 0, 00:32:35.145 "r_mbytes_per_sec": 0, 00:32:35.145 "w_mbytes_per_sec": 0 00:32:35.145 }, 00:32:35.145 "claimed": true, 00:32:35.145 "claim_type": "exclusive_write", 00:32:35.145 "zoned": false, 00:32:35.145 "supported_io_types": { 00:32:35.145 "read": true, 00:32:35.145 "write": true, 00:32:35.145 "unmap": true, 00:32:35.145 "flush": true, 00:32:35.145 "reset": true, 00:32:35.145 "nvme_admin": false, 00:32:35.145 "nvme_io": false, 00:32:35.145 "nvme_io_md": false, 00:32:35.145 "write_zeroes": true, 00:32:35.145 "zcopy": true, 00:32:35.145 "get_zone_info": false, 00:32:35.145 "zone_management": false, 00:32:35.145 "zone_append": false, 00:32:35.145 "compare": false, 00:32:35.145 "compare_and_write": false, 00:32:35.145 "abort": true, 00:32:35.145 "seek_hole": false, 00:32:35.145 "seek_data": false, 00:32:35.145 "copy": true, 00:32:35.145 "nvme_iov_md": false 00:32:35.145 }, 00:32:35.145 "memory_domains": [ 00:32:35.145 { 00:32:35.145 "dma_device_id": "system", 00:32:35.145 "dma_device_type": 1 00:32:35.145 }, 00:32:35.145 { 00:32:35.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.145 "dma_device_type": 2 00:32:35.145 } 00:32:35.145 ], 00:32:35.145 "driver_specific": {} 00:32:35.145 } 00:32:35.145 ] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:35.145 
14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:35.145 "name": "Existed_Raid", 00:32:35.145 "uuid": "9bf95c77-d942-4433-9686-2d3dfbce7404", 00:32:35.145 "strip_size_kb": 64, 00:32:35.145 "state": "online", 00:32:35.145 "raid_level": "concat", 00:32:35.145 "superblock": false, 00:32:35.145 "num_base_bdevs": 4, 00:32:35.145 "num_base_bdevs_discovered": 4, 00:32:35.145 
"num_base_bdevs_operational": 4, 00:32:35.145 "base_bdevs_list": [ 00:32:35.145 { 00:32:35.145 "name": "NewBaseBdev", 00:32:35.145 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:35.145 "is_configured": true, 00:32:35.145 "data_offset": 0, 00:32:35.145 "data_size": 65536 00:32:35.145 }, 00:32:35.145 { 00:32:35.145 "name": "BaseBdev2", 00:32:35.145 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:35.145 "is_configured": true, 00:32:35.145 "data_offset": 0, 00:32:35.145 "data_size": 65536 00:32:35.145 }, 00:32:35.145 { 00:32:35.145 "name": "BaseBdev3", 00:32:35.145 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:35.145 "is_configured": true, 00:32:35.145 "data_offset": 0, 00:32:35.145 "data_size": 65536 00:32:35.145 }, 00:32:35.145 { 00:32:35.145 "name": "BaseBdev4", 00:32:35.145 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:35.145 "is_configured": true, 00:32:35.145 "data_offset": 0, 00:32:35.145 "data_size": 65536 00:32:35.145 } 00:32:35.145 ] 00:32:35.145 }' 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:35.145 14:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.722 14:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.722 [2024-10-09 14:01:42.011999] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:35.722 "name": "Existed_Raid", 00:32:35.722 "aliases": [ 00:32:35.722 "9bf95c77-d942-4433-9686-2d3dfbce7404" 00:32:35.722 ], 00:32:35.722 "product_name": "Raid Volume", 00:32:35.722 "block_size": 512, 00:32:35.722 "num_blocks": 262144, 00:32:35.722 "uuid": "9bf95c77-d942-4433-9686-2d3dfbce7404", 00:32:35.722 "assigned_rate_limits": { 00:32:35.722 "rw_ios_per_sec": 0, 00:32:35.722 "rw_mbytes_per_sec": 0, 00:32:35.722 "r_mbytes_per_sec": 0, 00:32:35.722 "w_mbytes_per_sec": 0 00:32:35.722 }, 00:32:35.722 "claimed": false, 00:32:35.722 "zoned": false, 00:32:35.722 "supported_io_types": { 00:32:35.722 "read": true, 00:32:35.722 "write": true, 00:32:35.722 "unmap": true, 00:32:35.722 "flush": true, 00:32:35.722 "reset": true, 00:32:35.722 "nvme_admin": false, 00:32:35.722 "nvme_io": false, 00:32:35.722 "nvme_io_md": false, 00:32:35.722 "write_zeroes": true, 00:32:35.722 "zcopy": false, 00:32:35.722 "get_zone_info": false, 00:32:35.722 "zone_management": false, 00:32:35.722 "zone_append": false, 00:32:35.722 "compare": false, 00:32:35.722 "compare_and_write": false, 00:32:35.722 "abort": false, 00:32:35.722 "seek_hole": false, 00:32:35.722 "seek_data": false, 00:32:35.722 "copy": false, 00:32:35.722 "nvme_iov_md": false 00:32:35.722 }, 00:32:35.722 "memory_domains": [ 00:32:35.722 { 00:32:35.722 "dma_device_id": "system", 
00:32:35.722 "dma_device_type": 1 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.722 "dma_device_type": 2 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "system", 00:32:35.722 "dma_device_type": 1 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.722 "dma_device_type": 2 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "system", 00:32:35.722 "dma_device_type": 1 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.722 "dma_device_type": 2 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "system", 00:32:35.722 "dma_device_type": 1 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.722 "dma_device_type": 2 00:32:35.722 } 00:32:35.722 ], 00:32:35.722 "driver_specific": { 00:32:35.722 "raid": { 00:32:35.722 "uuid": "9bf95c77-d942-4433-9686-2d3dfbce7404", 00:32:35.722 "strip_size_kb": 64, 00:32:35.722 "state": "online", 00:32:35.722 "raid_level": "concat", 00:32:35.722 "superblock": false, 00:32:35.722 "num_base_bdevs": 4, 00:32:35.722 "num_base_bdevs_discovered": 4, 00:32:35.722 "num_base_bdevs_operational": 4, 00:32:35.722 "base_bdevs_list": [ 00:32:35.722 { 00:32:35.722 "name": "NewBaseBdev", 00:32:35.722 "uuid": "ec30592f-7ee4-4bcd-8ba3-5d0da5e09542", 00:32:35.722 "is_configured": true, 00:32:35.722 "data_offset": 0, 00:32:35.722 "data_size": 65536 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "name": "BaseBdev2", 00:32:35.722 "uuid": "436ea05f-10da-4d73-bea5-9c2fd2091b40", 00:32:35.722 "is_configured": true, 00:32:35.722 "data_offset": 0, 00:32:35.722 "data_size": 65536 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "name": "BaseBdev3", 00:32:35.722 "uuid": "cd439b91-cbad-4747-adf6-1cd17de7d524", 00:32:35.722 "is_configured": true, 00:32:35.722 "data_offset": 0, 00:32:35.722 "data_size": 65536 00:32:35.722 }, 00:32:35.722 { 00:32:35.722 "name": "BaseBdev4", 
00:32:35.722 "uuid": "17862802-13e5-4713-ad81-43acbb2e0ae2", 00:32:35.722 "is_configured": true, 00:32:35.722 "data_offset": 0, 00:32:35.722 "data_size": 65536 00:32:35.722 } 00:32:35.722 ] 00:32:35.722 } 00:32:35.722 } 00:32:35.722 }' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:35.722 BaseBdev2 00:32:35.722 BaseBdev3 00:32:35.722 BaseBdev4' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.722 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.723 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:35.723 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.723 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.723 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.723 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:35.999 14:01:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:35.999 [2024-10-09 14:01:42.335729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:35.999 [2024-10-09 14:01:42.335761] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:35.999 [2024-10-09 14:01:42.335838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:35.999 [2024-10-09 14:01:42.335910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:35.999 [2024-10-09 14:01:42.335922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82523 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 82523 ']' 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82523 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82523 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:35.999 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:35.999 killing process with pid 82523 00:32:36.000 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82523' 00:32:36.000 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82523 00:32:36.000 [2024-10-09 14:01:42.382281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:36.000 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82523 00:32:36.000 [2024-10-09 14:01:42.423542] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:32:36.258 00:32:36.258 real 0m10.044s 00:32:36.258 user 0m17.442s 00:32:36.258 sys 0m2.067s 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.258 ************************************ 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:36.258 END TEST raid_state_function_test 00:32:36.258 ************************************ 00:32:36.258 14:01:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:32:36.258 14:01:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:36.258 14:01:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.258 14:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:36.258 ************************************ 00:32:36.258 START TEST raid_state_function_test_sb 00:32:36.258 ************************************ 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:36.258 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:36.259 14:01:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:32:36.259 Process raid pid: 83179 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83179 00:32:36.259 14:01:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83179' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83179 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83179 ']' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:36.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:36.259 14:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:36.517 [2024-10-09 14:01:42.851928] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:36.517 [2024-10-09 14:01:42.853105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.517 [2024-10-09 14:01:43.033672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.774 [2024-10-09 14:01:43.079530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.774 [2024-10-09 14:01:43.123622] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:36.774 [2024-10-09 14:01:43.123661] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.340 [2024-10-09 14:01:43.798777] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:37.340 [2024-10-09 14:01:43.798830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:37.340 [2024-10-09 14:01:43.798853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:37.340 [2024-10-09 14:01:43.798867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:37.340 [2024-10-09 14:01:43.798875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:32:37.340 [2024-10-09 14:01:43.798891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:37.340 [2024-10-09 14:01:43.798899] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:37.340 [2024-10-09 14:01:43.798911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:37.340 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.341 
14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.341 "name": "Existed_Raid", 00:32:37.341 "uuid": "e54b75f1-6c79-473d-9666-4c8c9c589e57", 00:32:37.341 "strip_size_kb": 64, 00:32:37.341 "state": "configuring", 00:32:37.341 "raid_level": "concat", 00:32:37.341 "superblock": true, 00:32:37.341 "num_base_bdevs": 4, 00:32:37.341 "num_base_bdevs_discovered": 0, 00:32:37.341 "num_base_bdevs_operational": 4, 00:32:37.341 "base_bdevs_list": [ 00:32:37.341 { 00:32:37.341 "name": "BaseBdev1", 00:32:37.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.341 "is_configured": false, 00:32:37.341 "data_offset": 0, 00:32:37.341 "data_size": 0 00:32:37.341 }, 00:32:37.341 { 00:32:37.341 "name": "BaseBdev2", 00:32:37.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.341 "is_configured": false, 00:32:37.341 "data_offset": 0, 00:32:37.341 "data_size": 0 00:32:37.341 }, 00:32:37.341 { 00:32:37.341 "name": "BaseBdev3", 00:32:37.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.341 "is_configured": false, 00:32:37.341 "data_offset": 0, 00:32:37.341 "data_size": 0 00:32:37.341 }, 00:32:37.341 { 00:32:37.341 "name": "BaseBdev4", 00:32:37.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.341 "is_configured": false, 00:32:37.341 "data_offset": 0, 00:32:37.341 "data_size": 0 00:32:37.341 } 00:32:37.341 ] 00:32:37.341 }' 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.341 14:01:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 14:01:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 [2024-10-09 14:01:44.226771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:37.908 [2024-10-09 14:01:44.226829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 [2024-10-09 14:01:44.234808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:37.908 [2024-10-09 14:01:44.234853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:37.908 [2024-10-09 14:01:44.234863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:37.908 [2024-10-09 14:01:44.234876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:37.908 [2024-10-09 14:01:44.234883] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:37.908 [2024-10-09 14:01:44.234896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:37.908 [2024-10-09 14:01:44.234903] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:32:37.908 [2024-10-09 14:01:44.234915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 [2024-10-09 14:01:44.252268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:37.908 BaseBdev1 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 [ 00:32:37.908 { 00:32:37.908 "name": "BaseBdev1", 00:32:37.908 "aliases": [ 00:32:37.908 "684efb9e-df66-4657-a675-4b25ed61d05a" 00:32:37.908 ], 00:32:37.908 "product_name": "Malloc disk", 00:32:37.908 "block_size": 512, 00:32:37.908 "num_blocks": 65536, 00:32:37.908 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:37.908 "assigned_rate_limits": { 00:32:37.908 "rw_ios_per_sec": 0, 00:32:37.908 "rw_mbytes_per_sec": 0, 00:32:37.908 "r_mbytes_per_sec": 0, 00:32:37.908 "w_mbytes_per_sec": 0 00:32:37.908 }, 00:32:37.908 "claimed": true, 00:32:37.908 "claim_type": "exclusive_write", 00:32:37.908 "zoned": false, 00:32:37.908 "supported_io_types": { 00:32:37.908 "read": true, 00:32:37.908 "write": true, 00:32:37.908 "unmap": true, 00:32:37.908 "flush": true, 00:32:37.908 "reset": true, 00:32:37.908 "nvme_admin": false, 00:32:37.908 "nvme_io": false, 00:32:37.908 "nvme_io_md": false, 00:32:37.908 "write_zeroes": true, 00:32:37.908 "zcopy": true, 00:32:37.908 "get_zone_info": false, 00:32:37.908 "zone_management": false, 00:32:37.908 "zone_append": false, 00:32:37.908 "compare": false, 00:32:37.908 "compare_and_write": false, 00:32:37.908 "abort": true, 00:32:37.908 "seek_hole": false, 00:32:37.908 "seek_data": false, 00:32:37.908 "copy": true, 00:32:37.908 "nvme_iov_md": false 00:32:37.908 }, 00:32:37.908 "memory_domains": [ 00:32:37.908 { 00:32:37.908 "dma_device_id": "system", 00:32:37.908 "dma_device_type": 1 00:32:37.908 }, 00:32:37.908 { 00:32:37.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.908 "dma_device_type": 2 00:32:37.908 } 
00:32:37.908 ], 00:32:37.908 "driver_specific": {} 00:32:37.908 } 00:32:37.908 ] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:37.908 14:01:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.908 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:37.908 "name": "Existed_Raid", 00:32:37.908 "uuid": "851da6c8-baed-4a70-b4d9-efe4eb1ae750", 00:32:37.908 "strip_size_kb": 64, 00:32:37.908 "state": "configuring", 00:32:37.909 "raid_level": "concat", 00:32:37.909 "superblock": true, 00:32:37.909 "num_base_bdevs": 4, 00:32:37.909 "num_base_bdevs_discovered": 1, 00:32:37.909 "num_base_bdevs_operational": 4, 00:32:37.909 "base_bdevs_list": [ 00:32:37.909 { 00:32:37.909 "name": "BaseBdev1", 00:32:37.909 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:37.909 "is_configured": true, 00:32:37.909 "data_offset": 2048, 00:32:37.909 "data_size": 63488 00:32:37.909 }, 00:32:37.909 { 00:32:37.909 "name": "BaseBdev2", 00:32:37.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.909 "is_configured": false, 00:32:37.909 "data_offset": 0, 00:32:37.909 "data_size": 0 00:32:37.909 }, 00:32:37.909 { 00:32:37.909 "name": "BaseBdev3", 00:32:37.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.909 "is_configured": false, 00:32:37.909 "data_offset": 0, 00:32:37.909 "data_size": 0 00:32:37.909 }, 00:32:37.909 { 00:32:37.909 "name": "BaseBdev4", 00:32:37.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.909 "is_configured": false, 00:32:37.909 "data_offset": 0, 00:32:37.909 "data_size": 0 00:32:37.909 } 00:32:37.909 ] 00:32:37.909 }' 00:32:37.909 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:37.909 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.475 14:01:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.475 [2024-10-09 14:01:44.720426] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:38.475 [2024-10-09 14:01:44.720624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.475 [2024-10-09 14:01:44.728494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:38.475 [2024-10-09 14:01:44.730986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:38.475 [2024-10-09 14:01:44.731031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:38.475 [2024-10-09 14:01:44.731043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:38.475 [2024-10-09 14:01:44.731055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:38.475 [2024-10-09 14:01:44.731063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:38.475 [2024-10-09 14:01:44.731075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:32:38.475 "name": "Existed_Raid", 00:32:38.475 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:38.475 "strip_size_kb": 64, 00:32:38.475 "state": "configuring", 00:32:38.475 "raid_level": "concat", 00:32:38.475 "superblock": true, 00:32:38.475 "num_base_bdevs": 4, 00:32:38.475 "num_base_bdevs_discovered": 1, 00:32:38.475 "num_base_bdevs_operational": 4, 00:32:38.475 "base_bdevs_list": [ 00:32:38.475 { 00:32:38.475 "name": "BaseBdev1", 00:32:38.475 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:38.475 "is_configured": true, 00:32:38.475 "data_offset": 2048, 00:32:38.475 "data_size": 63488 00:32:38.475 }, 00:32:38.475 { 00:32:38.475 "name": "BaseBdev2", 00:32:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.475 "is_configured": false, 00:32:38.475 "data_offset": 0, 00:32:38.475 "data_size": 0 00:32:38.475 }, 00:32:38.475 { 00:32:38.475 "name": "BaseBdev3", 00:32:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.475 "is_configured": false, 00:32:38.475 "data_offset": 0, 00:32:38.475 "data_size": 0 00:32:38.475 }, 00:32:38.475 { 00:32:38.475 "name": "BaseBdev4", 00:32:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.475 "is_configured": false, 00:32:38.475 "data_offset": 0, 00:32:38.475 "data_size": 0 00:32:38.475 } 00:32:38.475 ] 00:32:38.475 }' 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.475 14:01:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.734 [2024-10-09 14:01:45.151125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:32:38.734 BaseBdev2 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.734 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.734 [ 00:32:38.734 { 00:32:38.734 "name": "BaseBdev2", 00:32:38.734 "aliases": [ 00:32:38.734 "95c75dfc-6ac1-4820-8892-f2bb953fbbd1" 00:32:38.734 ], 00:32:38.734 "product_name": "Malloc disk", 00:32:38.734 "block_size": 512, 00:32:38.734 "num_blocks": 65536, 00:32:38.734 "uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 
00:32:38.734 "assigned_rate_limits": { 00:32:38.734 "rw_ios_per_sec": 0, 00:32:38.734 "rw_mbytes_per_sec": 0, 00:32:38.734 "r_mbytes_per_sec": 0, 00:32:38.734 "w_mbytes_per_sec": 0 00:32:38.734 }, 00:32:38.734 "claimed": true, 00:32:38.734 "claim_type": "exclusive_write", 00:32:38.734 "zoned": false, 00:32:38.734 "supported_io_types": { 00:32:38.734 "read": true, 00:32:38.734 "write": true, 00:32:38.734 "unmap": true, 00:32:38.734 "flush": true, 00:32:38.734 "reset": true, 00:32:38.734 "nvme_admin": false, 00:32:38.734 "nvme_io": false, 00:32:38.734 "nvme_io_md": false, 00:32:38.734 "write_zeroes": true, 00:32:38.734 "zcopy": true, 00:32:38.734 "get_zone_info": false, 00:32:38.734 "zone_management": false, 00:32:38.734 "zone_append": false, 00:32:38.734 "compare": false, 00:32:38.734 "compare_and_write": false, 00:32:38.734 "abort": true, 00:32:38.734 "seek_hole": false, 00:32:38.734 "seek_data": false, 00:32:38.734 "copy": true, 00:32:38.734 "nvme_iov_md": false 00:32:38.734 }, 00:32:38.734 "memory_domains": [ 00:32:38.734 { 00:32:38.734 "dma_device_id": "system", 00:32:38.734 "dma_device_type": 1 00:32:38.734 }, 00:32:38.734 { 00:32:38.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:38.734 "dma_device_type": 2 00:32:38.734 } 00:32:38.734 ], 00:32:38.734 "driver_specific": {} 00:32:38.735 } 00:32:38.735 ] 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.735 "name": "Existed_Raid", 00:32:38.735 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:38.735 "strip_size_kb": 64, 00:32:38.735 "state": "configuring", 00:32:38.735 "raid_level": "concat", 00:32:38.735 "superblock": true, 00:32:38.735 "num_base_bdevs": 4, 00:32:38.735 "num_base_bdevs_discovered": 2, 00:32:38.735 
"num_base_bdevs_operational": 4, 00:32:38.735 "base_bdevs_list": [ 00:32:38.735 { 00:32:38.735 "name": "BaseBdev1", 00:32:38.735 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:38.735 "is_configured": true, 00:32:38.735 "data_offset": 2048, 00:32:38.735 "data_size": 63488 00:32:38.735 }, 00:32:38.735 { 00:32:38.735 "name": "BaseBdev2", 00:32:38.735 "uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 00:32:38.735 "is_configured": true, 00:32:38.735 "data_offset": 2048, 00:32:38.735 "data_size": 63488 00:32:38.735 }, 00:32:38.735 { 00:32:38.735 "name": "BaseBdev3", 00:32:38.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.735 "is_configured": false, 00:32:38.735 "data_offset": 0, 00:32:38.735 "data_size": 0 00:32:38.735 }, 00:32:38.735 { 00:32:38.735 "name": "BaseBdev4", 00:32:38.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.735 "is_configured": false, 00:32:38.735 "data_offset": 0, 00:32:38.735 "data_size": 0 00:32:38.735 } 00:32:38.735 ] 00:32:38.735 }' 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.735 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 [2024-10-09 14:01:45.626471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:39.301 BaseBdev3 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.301 [ 00:32:39.301 { 00:32:39.301 "name": "BaseBdev3", 00:32:39.301 "aliases": [ 00:32:39.301 "969bb2e6-50af-4390-a908-7098c3b1e310" 00:32:39.301 ], 00:32:39.301 "product_name": "Malloc disk", 00:32:39.301 "block_size": 512, 00:32:39.301 "num_blocks": 65536, 00:32:39.301 "uuid": "969bb2e6-50af-4390-a908-7098c3b1e310", 00:32:39.301 "assigned_rate_limits": { 00:32:39.301 "rw_ios_per_sec": 0, 00:32:39.301 "rw_mbytes_per_sec": 0, 00:32:39.301 "r_mbytes_per_sec": 0, 00:32:39.301 "w_mbytes_per_sec": 0 00:32:39.301 }, 00:32:39.301 "claimed": true, 00:32:39.301 "claim_type": "exclusive_write", 00:32:39.301 "zoned": false, 00:32:39.301 "supported_io_types": { 
00:32:39.301 "read": true, 00:32:39.301 "write": true, 00:32:39.301 "unmap": true, 00:32:39.301 "flush": true, 00:32:39.301 "reset": true, 00:32:39.301 "nvme_admin": false, 00:32:39.301 "nvme_io": false, 00:32:39.301 "nvme_io_md": false, 00:32:39.301 "write_zeroes": true, 00:32:39.301 "zcopy": true, 00:32:39.301 "get_zone_info": false, 00:32:39.301 "zone_management": false, 00:32:39.301 "zone_append": false, 00:32:39.301 "compare": false, 00:32:39.301 "compare_and_write": false, 00:32:39.301 "abort": true, 00:32:39.301 "seek_hole": false, 00:32:39.301 "seek_data": false, 00:32:39.301 "copy": true, 00:32:39.301 "nvme_iov_md": false 00:32:39.301 }, 00:32:39.301 "memory_domains": [ 00:32:39.301 { 00:32:39.301 "dma_device_id": "system", 00:32:39.301 "dma_device_type": 1 00:32:39.301 }, 00:32:39.301 { 00:32:39.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.301 "dma_device_type": 2 00:32:39.301 } 00:32:39.301 ], 00:32:39.301 "driver_specific": {} 00:32:39.301 } 00:32:39.301 ] 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:39.301 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.302 "name": "Existed_Raid", 00:32:39.302 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:39.302 "strip_size_kb": 64, 00:32:39.302 "state": "configuring", 00:32:39.302 "raid_level": "concat", 00:32:39.302 "superblock": true, 00:32:39.302 "num_base_bdevs": 4, 00:32:39.302 "num_base_bdevs_discovered": 3, 00:32:39.302 "num_base_bdevs_operational": 4, 00:32:39.302 "base_bdevs_list": [ 00:32:39.302 { 00:32:39.302 "name": "BaseBdev1", 00:32:39.302 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:39.302 "is_configured": true, 00:32:39.302 "data_offset": 2048, 00:32:39.302 "data_size": 63488 00:32:39.302 }, 00:32:39.302 { 00:32:39.302 "name": "BaseBdev2", 00:32:39.302 
"uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 00:32:39.302 "is_configured": true, 00:32:39.302 "data_offset": 2048, 00:32:39.302 "data_size": 63488 00:32:39.302 }, 00:32:39.302 { 00:32:39.302 "name": "BaseBdev3", 00:32:39.302 "uuid": "969bb2e6-50af-4390-a908-7098c3b1e310", 00:32:39.302 "is_configured": true, 00:32:39.302 "data_offset": 2048, 00:32:39.302 "data_size": 63488 00:32:39.302 }, 00:32:39.302 { 00:32:39.302 "name": "BaseBdev4", 00:32:39.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.302 "is_configured": false, 00:32:39.302 "data_offset": 0, 00:32:39.302 "data_size": 0 00:32:39.302 } 00:32:39.302 ] 00:32:39.302 }' 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.302 14:01:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.560 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:39.560 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.560 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.819 [2024-10-09 14:01:46.113792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:39.819 [2024-10-09 14:01:46.113996] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:32:39.819 [2024-10-09 14:01:46.114012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:39.819 [2024-10-09 14:01:46.114310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:39.819 BaseBdev4 00:32:39.819 [2024-10-09 14:01:46.114438] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:32:39.819 [2024-10-09 14:01:46.114452] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:32:39.819 [2024-10-09 14:01:46.114591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.819 [ 00:32:39.819 { 00:32:39.819 "name": "BaseBdev4", 00:32:39.819 "aliases": [ 00:32:39.819 "8772b083-82ee-4945-89b3-9b9b028d26bb" 00:32:39.819 ], 00:32:39.819 "product_name": "Malloc disk", 00:32:39.819 "block_size": 512, 00:32:39.819 
"num_blocks": 65536, 00:32:39.819 "uuid": "8772b083-82ee-4945-89b3-9b9b028d26bb", 00:32:39.819 "assigned_rate_limits": { 00:32:39.819 "rw_ios_per_sec": 0, 00:32:39.819 "rw_mbytes_per_sec": 0, 00:32:39.819 "r_mbytes_per_sec": 0, 00:32:39.819 "w_mbytes_per_sec": 0 00:32:39.819 }, 00:32:39.819 "claimed": true, 00:32:39.819 "claim_type": "exclusive_write", 00:32:39.819 "zoned": false, 00:32:39.819 "supported_io_types": { 00:32:39.819 "read": true, 00:32:39.819 "write": true, 00:32:39.819 "unmap": true, 00:32:39.819 "flush": true, 00:32:39.819 "reset": true, 00:32:39.819 "nvme_admin": false, 00:32:39.819 "nvme_io": false, 00:32:39.819 "nvme_io_md": false, 00:32:39.819 "write_zeroes": true, 00:32:39.819 "zcopy": true, 00:32:39.819 "get_zone_info": false, 00:32:39.819 "zone_management": false, 00:32:39.819 "zone_append": false, 00:32:39.819 "compare": false, 00:32:39.819 "compare_and_write": false, 00:32:39.819 "abort": true, 00:32:39.819 "seek_hole": false, 00:32:39.819 "seek_data": false, 00:32:39.819 "copy": true, 00:32:39.819 "nvme_iov_md": false 00:32:39.819 }, 00:32:39.819 "memory_domains": [ 00:32:39.819 { 00:32:39.819 "dma_device_id": "system", 00:32:39.819 "dma_device_type": 1 00:32:39.819 }, 00:32:39.819 { 00:32:39.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:39.819 "dma_device_type": 2 00:32:39.819 } 00:32:39.819 ], 00:32:39.819 "driver_specific": {} 00:32:39.819 } 00:32:39.819 ] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:39.819 "name": "Existed_Raid", 00:32:39.819 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:39.819 "strip_size_kb": 64, 00:32:39.819 "state": "online", 00:32:39.819 "raid_level": "concat", 00:32:39.819 "superblock": true, 00:32:39.819 "num_base_bdevs": 4, 
00:32:39.819 "num_base_bdevs_discovered": 4, 00:32:39.819 "num_base_bdevs_operational": 4, 00:32:39.819 "base_bdevs_list": [ 00:32:39.819 { 00:32:39.819 "name": "BaseBdev1", 00:32:39.819 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:39.819 "is_configured": true, 00:32:39.819 "data_offset": 2048, 00:32:39.819 "data_size": 63488 00:32:39.819 }, 00:32:39.819 { 00:32:39.819 "name": "BaseBdev2", 00:32:39.819 "uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 00:32:39.819 "is_configured": true, 00:32:39.819 "data_offset": 2048, 00:32:39.819 "data_size": 63488 00:32:39.819 }, 00:32:39.819 { 00:32:39.819 "name": "BaseBdev3", 00:32:39.819 "uuid": "969bb2e6-50af-4390-a908-7098c3b1e310", 00:32:39.819 "is_configured": true, 00:32:39.819 "data_offset": 2048, 00:32:39.819 "data_size": 63488 00:32:39.819 }, 00:32:39.819 { 00:32:39.819 "name": "BaseBdev4", 00:32:39.819 "uuid": "8772b083-82ee-4945-89b3-9b9b028d26bb", 00:32:39.819 "is_configured": true, 00:32:39.819 "data_offset": 2048, 00:32:39.819 "data_size": 63488 00:32:39.819 } 00:32:39.819 ] 00:32:39.819 }' 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:39.819 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:40.078 
14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:40.078 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.078 [2024-10-09 14:01:46.622332] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:40.336 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.336 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:40.336 "name": "Existed_Raid", 00:32:40.336 "aliases": [ 00:32:40.336 "00e36243-63f2-4fcb-abb8-66f17b3f26b0" 00:32:40.336 ], 00:32:40.336 "product_name": "Raid Volume", 00:32:40.336 "block_size": 512, 00:32:40.336 "num_blocks": 253952, 00:32:40.336 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:40.336 "assigned_rate_limits": { 00:32:40.336 "rw_ios_per_sec": 0, 00:32:40.336 "rw_mbytes_per_sec": 0, 00:32:40.336 "r_mbytes_per_sec": 0, 00:32:40.336 "w_mbytes_per_sec": 0 00:32:40.336 }, 00:32:40.336 "claimed": false, 00:32:40.336 "zoned": false, 00:32:40.336 "supported_io_types": { 00:32:40.336 "read": true, 00:32:40.336 "write": true, 00:32:40.336 "unmap": true, 00:32:40.336 "flush": true, 00:32:40.336 "reset": true, 00:32:40.336 "nvme_admin": false, 00:32:40.336 "nvme_io": false, 00:32:40.336 "nvme_io_md": false, 00:32:40.336 "write_zeroes": true, 00:32:40.336 "zcopy": false, 00:32:40.336 "get_zone_info": false, 00:32:40.336 "zone_management": false, 00:32:40.336 "zone_append": false, 00:32:40.336 "compare": false, 00:32:40.336 "compare_and_write": false, 00:32:40.336 "abort": false, 00:32:40.336 "seek_hole": false, 00:32:40.336 "seek_data": false, 00:32:40.336 "copy": false, 00:32:40.336 
"nvme_iov_md": false 00:32:40.336 }, 00:32:40.336 "memory_domains": [ 00:32:40.336 { 00:32:40.336 "dma_device_id": "system", 00:32:40.336 "dma_device_type": 1 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.336 "dma_device_type": 2 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "system", 00:32:40.336 "dma_device_type": 1 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.336 "dma_device_type": 2 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "system", 00:32:40.336 "dma_device_type": 1 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.336 "dma_device_type": 2 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "system", 00:32:40.336 "dma_device_type": 1 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:40.336 "dma_device_type": 2 00:32:40.336 } 00:32:40.336 ], 00:32:40.336 "driver_specific": { 00:32:40.336 "raid": { 00:32:40.336 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:40.336 "strip_size_kb": 64, 00:32:40.336 "state": "online", 00:32:40.336 "raid_level": "concat", 00:32:40.336 "superblock": true, 00:32:40.336 "num_base_bdevs": 4, 00:32:40.336 "num_base_bdevs_discovered": 4, 00:32:40.336 "num_base_bdevs_operational": 4, 00:32:40.336 "base_bdevs_list": [ 00:32:40.336 { 00:32:40.336 "name": "BaseBdev1", 00:32:40.336 "uuid": "684efb9e-df66-4657-a675-4b25ed61d05a", 00:32:40.336 "is_configured": true, 00:32:40.336 "data_offset": 2048, 00:32:40.336 "data_size": 63488 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "name": "BaseBdev2", 00:32:40.336 "uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 00:32:40.336 "is_configured": true, 00:32:40.336 "data_offset": 2048, 00:32:40.336 "data_size": 63488 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "name": "BaseBdev3", 00:32:40.336 "uuid": "969bb2e6-50af-4390-a908-7098c3b1e310", 00:32:40.336 "is_configured": true, 
00:32:40.336 "data_offset": 2048, 00:32:40.336 "data_size": 63488 00:32:40.336 }, 00:32:40.336 { 00:32:40.336 "name": "BaseBdev4", 00:32:40.336 "uuid": "8772b083-82ee-4945-89b3-9b9b028d26bb", 00:32:40.336 "is_configured": true, 00:32:40.336 "data_offset": 2048, 00:32:40.337 "data_size": 63488 00:32:40.337 } 00:32:40.337 ] 00:32:40.337 } 00:32:40.337 } 00:32:40.337 }' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:32:40.337 BaseBdev2 00:32:40.337 BaseBdev3 00:32:40.337 BaseBdev4' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.337 14:01:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.337 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.595 [2024-10-09 14:01:46.942035] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:40.595 [2024-10-09 14:01:46.942067] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:40.595 [2024-10-09 14:01:46.942121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:40.595 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.596 14:01:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:40.596 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.596 "name": "Existed_Raid", 00:32:40.596 "uuid": "00e36243-63f2-4fcb-abb8-66f17b3f26b0", 00:32:40.596 "strip_size_kb": 64, 00:32:40.596 "state": "offline", 00:32:40.596 "raid_level": "concat", 00:32:40.596 "superblock": true, 00:32:40.596 "num_base_bdevs": 4, 00:32:40.596 "num_base_bdevs_discovered": 3, 00:32:40.596 "num_base_bdevs_operational": 3, 00:32:40.596 "base_bdevs_list": [ 00:32:40.596 { 00:32:40.596 "name": null, 00:32:40.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.596 "is_configured": false, 00:32:40.596 "data_offset": 0, 00:32:40.596 "data_size": 63488 00:32:40.596 }, 00:32:40.596 { 00:32:40.596 "name": "BaseBdev2", 00:32:40.596 "uuid": "95c75dfc-6ac1-4820-8892-f2bb953fbbd1", 00:32:40.596 "is_configured": true, 00:32:40.596 "data_offset": 2048, 00:32:40.596 "data_size": 63488 00:32:40.596 }, 00:32:40.596 { 00:32:40.596 "name": "BaseBdev3", 00:32:40.596 "uuid": "969bb2e6-50af-4390-a908-7098c3b1e310", 00:32:40.596 "is_configured": true, 00:32:40.596 "data_offset": 2048, 00:32:40.596 "data_size": 63488 00:32:40.596 }, 00:32:40.596 { 00:32:40.596 "name": "BaseBdev4", 00:32:40.596 "uuid": "8772b083-82ee-4945-89b3-9b9b028d26bb", 00:32:40.596 "is_configured": true, 00:32:40.596 "data_offset": 2048, 00:32:40.596 "data_size": 63488 00:32:40.596 } 00:32:40.596 ] 00:32:40.596 }' 00:32:40.596 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.596 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:40.854 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:32:40.854 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:41.113 14:01:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.113 [2024-10-09 14:01:47.454295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.113 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.114 [2024-10-09 14:01:47.526531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:32:41.114 14:01:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.114 [2024-10-09 14:01:47.590706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:41.114 [2024-10-09 14:01:47.590749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.114 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 BaseBdev2 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 [ 00:32:41.373 { 00:32:41.373 "name": "BaseBdev2", 00:32:41.373 "aliases": [ 00:32:41.373 
"cd8473ad-32c0-4595-873b-6ce0ad733478" 00:32:41.373 ], 00:32:41.373 "product_name": "Malloc disk", 00:32:41.373 "block_size": 512, 00:32:41.373 "num_blocks": 65536, 00:32:41.373 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:41.373 "assigned_rate_limits": { 00:32:41.373 "rw_ios_per_sec": 0, 00:32:41.373 "rw_mbytes_per_sec": 0, 00:32:41.373 "r_mbytes_per_sec": 0, 00:32:41.373 "w_mbytes_per_sec": 0 00:32:41.373 }, 00:32:41.373 "claimed": false, 00:32:41.373 "zoned": false, 00:32:41.373 "supported_io_types": { 00:32:41.373 "read": true, 00:32:41.373 "write": true, 00:32:41.373 "unmap": true, 00:32:41.373 "flush": true, 00:32:41.373 "reset": true, 00:32:41.373 "nvme_admin": false, 00:32:41.373 "nvme_io": false, 00:32:41.373 "nvme_io_md": false, 00:32:41.373 "write_zeroes": true, 00:32:41.373 "zcopy": true, 00:32:41.373 "get_zone_info": false, 00:32:41.373 "zone_management": false, 00:32:41.373 "zone_append": false, 00:32:41.373 "compare": false, 00:32:41.373 "compare_and_write": false, 00:32:41.373 "abort": true, 00:32:41.373 "seek_hole": false, 00:32:41.373 "seek_data": false, 00:32:41.373 "copy": true, 00:32:41.373 "nvme_iov_md": false 00:32:41.373 }, 00:32:41.373 "memory_domains": [ 00:32:41.373 { 00:32:41.373 "dma_device_id": "system", 00:32:41.373 "dma_device_type": 1 00:32:41.373 }, 00:32:41.373 { 00:32:41.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.373 "dma_device_type": 2 00:32:41.373 } 00:32:41.373 ], 00:32:41.373 "driver_specific": {} 00:32:41.373 } 00:32:41.373 ] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.373 14:01:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 BaseBdev3 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.373 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.373 [ 00:32:41.373 { 
00:32:41.373 "name": "BaseBdev3", 00:32:41.373 "aliases": [ 00:32:41.373 "2460cc0a-72f0-429a-94a9-1acdbcb51dfa" 00:32:41.373 ], 00:32:41.373 "product_name": "Malloc disk", 00:32:41.373 "block_size": 512, 00:32:41.373 "num_blocks": 65536, 00:32:41.373 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:41.373 "assigned_rate_limits": { 00:32:41.373 "rw_ios_per_sec": 0, 00:32:41.373 "rw_mbytes_per_sec": 0, 00:32:41.373 "r_mbytes_per_sec": 0, 00:32:41.373 "w_mbytes_per_sec": 0 00:32:41.373 }, 00:32:41.373 "claimed": false, 00:32:41.373 "zoned": false, 00:32:41.373 "supported_io_types": { 00:32:41.373 "read": true, 00:32:41.373 "write": true, 00:32:41.373 "unmap": true, 00:32:41.373 "flush": true, 00:32:41.373 "reset": true, 00:32:41.373 "nvme_admin": false, 00:32:41.373 "nvme_io": false, 00:32:41.374 "nvme_io_md": false, 00:32:41.374 "write_zeroes": true, 00:32:41.374 "zcopy": true, 00:32:41.374 "get_zone_info": false, 00:32:41.374 "zone_management": false, 00:32:41.374 "zone_append": false, 00:32:41.374 "compare": false, 00:32:41.374 "compare_and_write": false, 00:32:41.374 "abort": true, 00:32:41.374 "seek_hole": false, 00:32:41.374 "seek_data": false, 00:32:41.374 "copy": true, 00:32:41.374 "nvme_iov_md": false 00:32:41.374 }, 00:32:41.374 "memory_domains": [ 00:32:41.374 { 00:32:41.374 "dma_device_id": "system", 00:32:41.374 "dma_device_type": 1 00:32:41.374 }, 00:32:41.374 { 00:32:41.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.374 "dma_device_type": 2 00:32:41.374 } 00:32:41.374 ], 00:32:41.374 "driver_specific": {} 00:32:41.374 } 00:32:41.374 ] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.374 BaseBdev4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:32:41.374 [ 00:32:41.374 { 00:32:41.374 "name": "BaseBdev4", 00:32:41.374 "aliases": [ 00:32:41.374 "6a1deb4b-eb2d-4f57-a3db-e4bf66996415" 00:32:41.374 ], 00:32:41.374 "product_name": "Malloc disk", 00:32:41.374 "block_size": 512, 00:32:41.374 "num_blocks": 65536, 00:32:41.374 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:41.374 "assigned_rate_limits": { 00:32:41.374 "rw_ios_per_sec": 0, 00:32:41.374 "rw_mbytes_per_sec": 0, 00:32:41.374 "r_mbytes_per_sec": 0, 00:32:41.374 "w_mbytes_per_sec": 0 00:32:41.374 }, 00:32:41.374 "claimed": false, 00:32:41.374 "zoned": false, 00:32:41.374 "supported_io_types": { 00:32:41.374 "read": true, 00:32:41.374 "write": true, 00:32:41.374 "unmap": true, 00:32:41.374 "flush": true, 00:32:41.374 "reset": true, 00:32:41.374 "nvme_admin": false, 00:32:41.374 "nvme_io": false, 00:32:41.374 "nvme_io_md": false, 00:32:41.374 "write_zeroes": true, 00:32:41.374 "zcopy": true, 00:32:41.374 "get_zone_info": false, 00:32:41.374 "zone_management": false, 00:32:41.374 "zone_append": false, 00:32:41.374 "compare": false, 00:32:41.374 "compare_and_write": false, 00:32:41.374 "abort": true, 00:32:41.374 "seek_hole": false, 00:32:41.374 "seek_data": false, 00:32:41.374 "copy": true, 00:32:41.374 "nvme_iov_md": false 00:32:41.374 }, 00:32:41.374 "memory_domains": [ 00:32:41.374 { 00:32:41.374 "dma_device_id": "system", 00:32:41.374 "dma_device_type": 1 00:32:41.374 }, 00:32:41.374 { 00:32:41.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.374 "dma_device_type": 2 00:32:41.374 } 00:32:41.374 ], 00:32:41.374 "driver_specific": {} 00:32:41.374 } 00:32:41.374 ] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:32:41.374 14:01:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.374 [2024-10-09 14:01:47.818072] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:41.374 [2024-10-09 14:01:47.818120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:41.374 [2024-10-09 14:01:47.818143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:41.374 [2024-10-09 14:01:47.820349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:41.374 [2024-10-09 14:01:47.820398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.374 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.374 "name": "Existed_Raid", 00:32:41.374 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:41.374 "strip_size_kb": 64, 00:32:41.374 "state": "configuring", 00:32:41.374 "raid_level": "concat", 00:32:41.374 "superblock": true, 00:32:41.374 "num_base_bdevs": 4, 00:32:41.374 "num_base_bdevs_discovered": 3, 00:32:41.374 "num_base_bdevs_operational": 4, 00:32:41.374 "base_bdevs_list": [ 00:32:41.374 { 00:32:41.374 "name": "BaseBdev1", 00:32:41.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.374 "is_configured": false, 00:32:41.374 "data_offset": 0, 00:32:41.374 "data_size": 0 00:32:41.374 }, 00:32:41.374 { 00:32:41.374 "name": "BaseBdev2", 00:32:41.374 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:41.374 "is_configured": true, 00:32:41.374 "data_offset": 2048, 00:32:41.374 "data_size": 63488 
00:32:41.374 }, 00:32:41.374 { 00:32:41.374 "name": "BaseBdev3", 00:32:41.374 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:41.374 "is_configured": true, 00:32:41.374 "data_offset": 2048, 00:32:41.374 "data_size": 63488 00:32:41.374 }, 00:32:41.374 { 00:32:41.374 "name": "BaseBdev4", 00:32:41.374 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:41.374 "is_configured": true, 00:32:41.374 "data_offset": 2048, 00:32:41.374 "data_size": 63488 00:32:41.375 } 00:32:41.375 ] 00:32:41.375 }' 00:32:41.375 14:01:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.375 14:01:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.941 [2024-10-09 14:01:48.262158] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.941 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:41.941 "name": "Existed_Raid", 00:32:41.941 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:41.941 "strip_size_kb": 64, 00:32:41.941 "state": "configuring", 00:32:41.941 "raid_level": "concat", 00:32:41.941 "superblock": true, 00:32:41.941 "num_base_bdevs": 4, 00:32:41.941 "num_base_bdevs_discovered": 2, 00:32:41.941 "num_base_bdevs_operational": 4, 00:32:41.942 "base_bdevs_list": [ 00:32:41.942 { 00:32:41.942 "name": "BaseBdev1", 00:32:41.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.942 "is_configured": false, 00:32:41.942 "data_offset": 0, 00:32:41.942 "data_size": 0 00:32:41.942 }, 00:32:41.942 { 00:32:41.942 "name": null, 00:32:41.942 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:41.942 "is_configured": false, 00:32:41.942 "data_offset": 0, 00:32:41.942 "data_size": 63488 
00:32:41.942 }, 00:32:41.942 { 00:32:41.942 "name": "BaseBdev3", 00:32:41.942 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:41.942 "is_configured": true, 00:32:41.942 "data_offset": 2048, 00:32:41.942 "data_size": 63488 00:32:41.942 }, 00:32:41.942 { 00:32:41.942 "name": "BaseBdev4", 00:32:41.942 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:41.942 "is_configured": true, 00:32:41.942 "data_offset": 2048, 00:32:41.942 "data_size": 63488 00:32:41.942 } 00:32:41.942 ] 00:32:41.942 }' 00:32:41.942 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:41.942 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.209 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:42.209 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.209 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.209 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.482 [2024-10-09 14:01:48.781447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:42.482 BaseBdev1 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.482 [ 00:32:42.482 { 00:32:42.482 "name": "BaseBdev1", 00:32:42.482 "aliases": [ 00:32:42.482 "282d2185-7924-41ca-b95f-aeb24283ae06" 00:32:42.482 ], 00:32:42.482 "product_name": "Malloc disk", 00:32:42.482 "block_size": 512, 00:32:42.482 "num_blocks": 65536, 00:32:42.482 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:42.482 "assigned_rate_limits": { 00:32:42.482 "rw_ios_per_sec": 0, 00:32:42.482 "rw_mbytes_per_sec": 0, 
00:32:42.482 "r_mbytes_per_sec": 0, 00:32:42.482 "w_mbytes_per_sec": 0 00:32:42.482 }, 00:32:42.482 "claimed": true, 00:32:42.482 "claim_type": "exclusive_write", 00:32:42.482 "zoned": false, 00:32:42.482 "supported_io_types": { 00:32:42.482 "read": true, 00:32:42.482 "write": true, 00:32:42.482 "unmap": true, 00:32:42.482 "flush": true, 00:32:42.482 "reset": true, 00:32:42.482 "nvme_admin": false, 00:32:42.482 "nvme_io": false, 00:32:42.482 "nvme_io_md": false, 00:32:42.482 "write_zeroes": true, 00:32:42.482 "zcopy": true, 00:32:42.482 "get_zone_info": false, 00:32:42.482 "zone_management": false, 00:32:42.482 "zone_append": false, 00:32:42.482 "compare": false, 00:32:42.482 "compare_and_write": false, 00:32:42.482 "abort": true, 00:32:42.482 "seek_hole": false, 00:32:42.482 "seek_data": false, 00:32:42.482 "copy": true, 00:32:42.482 "nvme_iov_md": false 00:32:42.482 }, 00:32:42.482 "memory_domains": [ 00:32:42.482 { 00:32:42.482 "dma_device_id": "system", 00:32:42.482 "dma_device_type": 1 00:32:42.482 }, 00:32:42.482 { 00:32:42.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:42.482 "dma_device_type": 2 00:32:42.482 } 00:32:42.482 ], 00:32:42.482 "driver_specific": {} 00:32:42.482 } 00:32:42.482 ] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:42.482 14:01:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.482 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.483 "name": "Existed_Raid", 00:32:42.483 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:42.483 "strip_size_kb": 64, 00:32:42.483 "state": "configuring", 00:32:42.483 "raid_level": "concat", 00:32:42.483 "superblock": true, 00:32:42.483 "num_base_bdevs": 4, 00:32:42.483 "num_base_bdevs_discovered": 3, 00:32:42.483 "num_base_bdevs_operational": 4, 00:32:42.483 "base_bdevs_list": [ 00:32:42.483 { 00:32:42.483 "name": "BaseBdev1", 00:32:42.483 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:42.483 "is_configured": true, 00:32:42.483 "data_offset": 2048, 00:32:42.483 "data_size": 63488 00:32:42.483 }, 00:32:42.483 { 
00:32:42.483 "name": null, 00:32:42.483 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:42.483 "is_configured": false, 00:32:42.483 "data_offset": 0, 00:32:42.483 "data_size": 63488 00:32:42.483 }, 00:32:42.483 { 00:32:42.483 "name": "BaseBdev3", 00:32:42.483 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:42.483 "is_configured": true, 00:32:42.483 "data_offset": 2048, 00:32:42.483 "data_size": 63488 00:32:42.483 }, 00:32:42.483 { 00:32:42.483 "name": "BaseBdev4", 00:32:42.483 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:42.483 "is_configured": true, 00:32:42.483 "data_offset": 2048, 00:32:42.483 "data_size": 63488 00:32:42.483 } 00:32:42.483 ] 00:32:42.483 }' 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.483 14:01:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.741 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.741 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:42.741 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.741 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.741 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.999 [2024-10-09 14:01:49.305614] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.999 14:01:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:42.999 "name": "Existed_Raid", 00:32:42.999 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:42.999 "strip_size_kb": 64, 00:32:42.999 "state": "configuring", 00:32:42.999 "raid_level": "concat", 00:32:42.999 "superblock": true, 00:32:42.999 "num_base_bdevs": 4, 00:32:42.999 "num_base_bdevs_discovered": 2, 00:32:42.999 "num_base_bdevs_operational": 4, 00:32:42.999 "base_bdevs_list": [ 00:32:42.999 { 00:32:42.999 "name": "BaseBdev1", 00:32:42.999 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:42.999 "is_configured": true, 00:32:42.999 "data_offset": 2048, 00:32:42.999 "data_size": 63488 00:32:42.999 }, 00:32:42.999 { 00:32:42.999 "name": null, 00:32:42.999 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:42.999 "is_configured": false, 00:32:42.999 "data_offset": 0, 00:32:42.999 "data_size": 63488 00:32:42.999 }, 00:32:42.999 { 00:32:42.999 "name": null, 00:32:42.999 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:42.999 "is_configured": false, 00:32:42.999 "data_offset": 0, 00:32:42.999 "data_size": 63488 00:32:42.999 }, 00:32:42.999 { 00:32:42.999 "name": "BaseBdev4", 00:32:42.999 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:42.999 "is_configured": true, 00:32:42.999 "data_offset": 2048, 00:32:42.999 "data_size": 63488 00:32:42.999 } 00:32:42.999 ] 00:32:42.999 }' 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:42.999 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.258 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.258 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:43.258 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.258 
14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.258 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.517 [2024-10-09 14:01:49.821817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:43.517 "name": "Existed_Raid", 00:32:43.517 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:43.517 "strip_size_kb": 64, 00:32:43.517 "state": "configuring", 00:32:43.517 "raid_level": "concat", 00:32:43.517 "superblock": true, 00:32:43.517 "num_base_bdevs": 4, 00:32:43.517 "num_base_bdevs_discovered": 3, 00:32:43.517 "num_base_bdevs_operational": 4, 00:32:43.517 "base_bdevs_list": [ 00:32:43.517 { 00:32:43.517 "name": "BaseBdev1", 00:32:43.517 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:43.517 "is_configured": true, 00:32:43.517 "data_offset": 2048, 00:32:43.517 "data_size": 63488 00:32:43.517 }, 00:32:43.517 { 00:32:43.517 "name": null, 00:32:43.517 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:43.517 "is_configured": false, 00:32:43.517 "data_offset": 0, 00:32:43.517 "data_size": 63488 00:32:43.517 }, 00:32:43.517 { 00:32:43.517 "name": "BaseBdev3", 00:32:43.517 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:43.517 "is_configured": true, 00:32:43.517 "data_offset": 2048, 00:32:43.517 "data_size": 63488 00:32:43.517 }, 00:32:43.517 { 00:32:43.517 "name": "BaseBdev4", 00:32:43.517 "uuid": 
"6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:43.517 "is_configured": true, 00:32:43.517 "data_offset": 2048, 00:32:43.517 "data_size": 63488 00:32:43.517 } 00:32:43.517 ] 00:32:43.517 }' 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:43.517 14:01:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.775 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.775 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:43.775 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.775 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:43.775 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.034 [2024-10-09 14:01:50.337940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.034 "name": "Existed_Raid", 00:32:44.034 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:44.034 "strip_size_kb": 64, 00:32:44.034 "state": "configuring", 00:32:44.034 "raid_level": "concat", 00:32:44.034 "superblock": true, 00:32:44.034 "num_base_bdevs": 4, 00:32:44.034 "num_base_bdevs_discovered": 2, 00:32:44.034 "num_base_bdevs_operational": 4, 00:32:44.034 "base_bdevs_list": [ 00:32:44.034 { 00:32:44.034 "name": null, 00:32:44.034 
"uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:44.034 "is_configured": false, 00:32:44.034 "data_offset": 0, 00:32:44.034 "data_size": 63488 00:32:44.034 }, 00:32:44.034 { 00:32:44.034 "name": null, 00:32:44.034 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:44.034 "is_configured": false, 00:32:44.034 "data_offset": 0, 00:32:44.034 "data_size": 63488 00:32:44.034 }, 00:32:44.034 { 00:32:44.034 "name": "BaseBdev3", 00:32:44.034 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:44.034 "is_configured": true, 00:32:44.034 "data_offset": 2048, 00:32:44.034 "data_size": 63488 00:32:44.034 }, 00:32:44.034 { 00:32:44.034 "name": "BaseBdev4", 00:32:44.034 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:44.034 "is_configured": true, 00:32:44.034 "data_offset": 2048, 00:32:44.034 "data_size": 63488 00:32:44.034 } 00:32:44.034 ] 00:32:44.034 }' 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.034 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.292 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.292 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.292 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.292 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:44.292 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.550 [2024-10-09 14:01:50.856629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.550 14:01:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.550 "name": "Existed_Raid", 00:32:44.550 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:44.550 "strip_size_kb": 64, 00:32:44.550 "state": "configuring", 00:32:44.550 "raid_level": "concat", 00:32:44.550 "superblock": true, 00:32:44.550 "num_base_bdevs": 4, 00:32:44.550 "num_base_bdevs_discovered": 3, 00:32:44.550 "num_base_bdevs_operational": 4, 00:32:44.550 "base_bdevs_list": [ 00:32:44.550 { 00:32:44.550 "name": null, 00:32:44.550 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:44.550 "is_configured": false, 00:32:44.550 "data_offset": 0, 00:32:44.550 "data_size": 63488 00:32:44.550 }, 00:32:44.550 { 00:32:44.550 "name": "BaseBdev2", 00:32:44.550 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:44.550 "is_configured": true, 00:32:44.550 "data_offset": 2048, 00:32:44.550 "data_size": 63488 00:32:44.550 }, 00:32:44.550 { 00:32:44.550 "name": "BaseBdev3", 00:32:44.550 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:44.550 "is_configured": true, 00:32:44.550 "data_offset": 2048, 00:32:44.550 "data_size": 63488 00:32:44.550 }, 00:32:44.550 { 00:32:44.550 "name": "BaseBdev4", 00:32:44.550 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:44.550 "is_configured": true, 00:32:44.550 "data_offset": 2048, 00:32:44.550 "data_size": 63488 00:32:44.550 } 00:32:44.550 ] 00:32:44.550 }' 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.550 14:01:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.808 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:44.808 14:01:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.808 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.808 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:44.808 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 282d2185-7924-41ca-b95f-aeb24283ae06 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.066 [2024-10-09 14:01:51.427810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:45.066 [2024-10-09 14:01:51.427987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:32:45.066 [2024-10-09 14:01:51.428001] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:45.066 NewBaseBdev 00:32:45.066 [2024-10-09 14:01:51.428267] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:45.066 [2024-10-09 14:01:51.428372] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:32:45.066 [2024-10-09 14:01:51.428386] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:32:45.066 [2024-10-09 14:01:51.428475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.066 14:01:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.066 [ 00:32:45.066 { 00:32:45.066 "name": "NewBaseBdev", 00:32:45.066 "aliases": [ 00:32:45.066 "282d2185-7924-41ca-b95f-aeb24283ae06" 00:32:45.066 ], 00:32:45.066 "product_name": "Malloc disk", 00:32:45.066 "block_size": 512, 00:32:45.066 "num_blocks": 65536, 00:32:45.066 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:45.066 "assigned_rate_limits": { 00:32:45.066 "rw_ios_per_sec": 0, 00:32:45.066 "rw_mbytes_per_sec": 0, 00:32:45.066 "r_mbytes_per_sec": 0, 00:32:45.066 "w_mbytes_per_sec": 0 00:32:45.066 }, 00:32:45.066 "claimed": true, 00:32:45.066 "claim_type": "exclusive_write", 00:32:45.066 "zoned": false, 00:32:45.066 "supported_io_types": { 00:32:45.066 "read": true, 00:32:45.066 "write": true, 00:32:45.066 "unmap": true, 00:32:45.066 "flush": true, 00:32:45.066 "reset": true, 00:32:45.066 "nvme_admin": false, 00:32:45.066 "nvme_io": false, 00:32:45.066 "nvme_io_md": false, 00:32:45.066 "write_zeroes": true, 00:32:45.066 "zcopy": true, 00:32:45.066 "get_zone_info": false, 00:32:45.066 "zone_management": false, 00:32:45.066 "zone_append": false, 00:32:45.066 "compare": false, 00:32:45.066 "compare_and_write": false, 00:32:45.066 "abort": true, 00:32:45.066 "seek_hole": false, 00:32:45.066 "seek_data": false, 00:32:45.066 "copy": true, 00:32:45.066 "nvme_iov_md": false 00:32:45.066 }, 00:32:45.066 "memory_domains": [ 00:32:45.066 { 00:32:45.066 "dma_device_id": "system", 00:32:45.066 "dma_device_type": 1 00:32:45.066 }, 00:32:45.066 { 00:32:45.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.066 "dma_device_type": 2 00:32:45.066 } 00:32:45.066 ], 00:32:45.066 "driver_specific": {} 00:32:45.066 } 00:32:45.066 ] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:32:45.066 14:01:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.066 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.067 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:45.067 "name": "Existed_Raid", 00:32:45.067 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:45.067 "strip_size_kb": 64, 00:32:45.067 
"state": "online", 00:32:45.067 "raid_level": "concat", 00:32:45.067 "superblock": true, 00:32:45.067 "num_base_bdevs": 4, 00:32:45.067 "num_base_bdevs_discovered": 4, 00:32:45.067 "num_base_bdevs_operational": 4, 00:32:45.067 "base_bdevs_list": [ 00:32:45.067 { 00:32:45.067 "name": "NewBaseBdev", 00:32:45.067 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:45.067 "is_configured": true, 00:32:45.067 "data_offset": 2048, 00:32:45.067 "data_size": 63488 00:32:45.067 }, 00:32:45.067 { 00:32:45.067 "name": "BaseBdev2", 00:32:45.067 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:45.067 "is_configured": true, 00:32:45.067 "data_offset": 2048, 00:32:45.067 "data_size": 63488 00:32:45.067 }, 00:32:45.067 { 00:32:45.067 "name": "BaseBdev3", 00:32:45.067 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:45.067 "is_configured": true, 00:32:45.067 "data_offset": 2048, 00:32:45.067 "data_size": 63488 00:32:45.067 }, 00:32:45.067 { 00:32:45.067 "name": "BaseBdev4", 00:32:45.067 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:45.067 "is_configured": true, 00:32:45.067 "data_offset": 2048, 00:32:45.067 "data_size": 63488 00:32:45.067 } 00:32:45.067 ] 00:32:45.067 }' 00:32:45.067 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:45.067 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:32:45.633 
14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.633 [2024-10-09 14:01:51.940297] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.633 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:45.633 "name": "Existed_Raid", 00:32:45.633 "aliases": [ 00:32:45.633 "b1c6be58-4158-4e4d-b656-b3d321e32934" 00:32:45.633 ], 00:32:45.633 "product_name": "Raid Volume", 00:32:45.633 "block_size": 512, 00:32:45.633 "num_blocks": 253952, 00:32:45.633 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:45.633 "assigned_rate_limits": { 00:32:45.633 "rw_ios_per_sec": 0, 00:32:45.633 "rw_mbytes_per_sec": 0, 00:32:45.633 "r_mbytes_per_sec": 0, 00:32:45.633 "w_mbytes_per_sec": 0 00:32:45.633 }, 00:32:45.633 "claimed": false, 00:32:45.633 "zoned": false, 00:32:45.633 "supported_io_types": { 00:32:45.633 "read": true, 00:32:45.633 "write": true, 00:32:45.633 "unmap": true, 00:32:45.633 "flush": true, 00:32:45.633 "reset": true, 00:32:45.633 "nvme_admin": false, 00:32:45.633 "nvme_io": false, 00:32:45.633 "nvme_io_md": false, 00:32:45.633 "write_zeroes": true, 00:32:45.633 "zcopy": false, 00:32:45.633 "get_zone_info": false, 00:32:45.633 "zone_management": false, 00:32:45.633 "zone_append": false, 00:32:45.633 "compare": false, 00:32:45.633 "compare_and_write": false, 00:32:45.633 "abort": 
false, 00:32:45.633 "seek_hole": false, 00:32:45.633 "seek_data": false, 00:32:45.633 "copy": false, 00:32:45.633 "nvme_iov_md": false 00:32:45.633 }, 00:32:45.633 "memory_domains": [ 00:32:45.633 { 00:32:45.633 "dma_device_id": "system", 00:32:45.633 "dma_device_type": 1 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.633 "dma_device_type": 2 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "system", 00:32:45.633 "dma_device_type": 1 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.633 "dma_device_type": 2 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "system", 00:32:45.633 "dma_device_type": 1 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.633 "dma_device_type": 2 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "system", 00:32:45.633 "dma_device_type": 1 00:32:45.633 }, 00:32:45.633 { 00:32:45.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:45.633 "dma_device_type": 2 00:32:45.633 } 00:32:45.633 ], 00:32:45.633 "driver_specific": { 00:32:45.634 "raid": { 00:32:45.634 "uuid": "b1c6be58-4158-4e4d-b656-b3d321e32934", 00:32:45.634 "strip_size_kb": 64, 00:32:45.634 "state": "online", 00:32:45.634 "raid_level": "concat", 00:32:45.634 "superblock": true, 00:32:45.634 "num_base_bdevs": 4, 00:32:45.634 "num_base_bdevs_discovered": 4, 00:32:45.634 "num_base_bdevs_operational": 4, 00:32:45.634 "base_bdevs_list": [ 00:32:45.634 { 00:32:45.634 "name": "NewBaseBdev", 00:32:45.634 "uuid": "282d2185-7924-41ca-b95f-aeb24283ae06", 00:32:45.634 "is_configured": true, 00:32:45.634 "data_offset": 2048, 00:32:45.634 "data_size": 63488 00:32:45.634 }, 00:32:45.634 { 00:32:45.634 "name": "BaseBdev2", 00:32:45.634 "uuid": "cd8473ad-32c0-4595-873b-6ce0ad733478", 00:32:45.634 "is_configured": true, 00:32:45.634 "data_offset": 2048, 00:32:45.634 "data_size": 63488 00:32:45.634 }, 00:32:45.634 { 00:32:45.634 
"name": "BaseBdev3", 00:32:45.634 "uuid": "2460cc0a-72f0-429a-94a9-1acdbcb51dfa", 00:32:45.634 "is_configured": true, 00:32:45.634 "data_offset": 2048, 00:32:45.634 "data_size": 63488 00:32:45.634 }, 00:32:45.634 { 00:32:45.634 "name": "BaseBdev4", 00:32:45.634 "uuid": "6a1deb4b-eb2d-4f57-a3db-e4bf66996415", 00:32:45.634 "is_configured": true, 00:32:45.634 "data_offset": 2048, 00:32:45.634 "data_size": 63488 00:32:45.634 } 00:32:45.634 ] 00:32:45.634 } 00:32:45.634 } 00:32:45.634 }' 00:32:45.634 14:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:32:45.634 BaseBdev2 00:32:45.634 BaseBdev3 00:32:45.634 BaseBdev4' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.634 14:01:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.634 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:45.893 [2024-10-09 14:01:52.260011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:45.893 [2024-10-09 14:01:52.260146] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:45.893 [2024-10-09 14:01:52.260229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:45.893 [2024-10-09 14:01:52.260299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:45.893 [2024-10-09 14:01:52.260320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83179 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83179 ']' 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83179 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83179 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:45.893 killing process with pid 83179 00:32:45.893 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83179' 00:32:45.894 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83179 00:32:45.894 [2024-10-09 14:01:52.305530] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:45.894 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83179 00:32:45.894 [2024-10-09 14:01:52.346988] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:46.152 ************************************ 00:32:46.152 END TEST raid_state_function_test_sb 00:32:46.152 ************************************ 00:32:46.152 14:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:32:46.152 00:32:46.152 real 0m9.860s 00:32:46.152 user 0m17.013s 00:32:46.152 sys 
0m2.142s 00:32:46.152 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:46.152 14:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:46.152 14:01:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:32:46.152 14:01:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:46.152 14:01:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:46.152 14:01:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:46.152 ************************************ 00:32:46.152 START TEST raid_superblock_test 00:32:46.152 ************************************ 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:32:46.152 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83827 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83827 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83827 ']' 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.153 14:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.412 [2024-10-09 14:01:52.782930] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:46.412 [2024-10-09 14:01:52.783416] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83827 ] 00:32:46.412 [2024-10-09 14:01:52.959654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.670 [2024-10-09 14:01:53.003891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.670 [2024-10-09 14:01:53.048515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:46.670 [2024-10-09 14:01:53.048586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:32:47.237 
14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.237 malloc1 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.237 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.237 [2024-10-09 14:01:53.737524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:47.237 [2024-10-09 14:01:53.737782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.237 [2024-10-09 14:01:53.737849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:47.237 [2024-10-09 14:01:53.737952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.237 [2024-10-09 14:01:53.740615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.238 [2024-10-09 14:01:53.740798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:47.238 pt1 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.238 malloc2 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.238 [2024-10-09 14:01:53.777607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:47.238 [2024-10-09 14:01:53.777694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.238 [2024-10-09 14:01:53.777721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:47.238 [2024-10-09 14:01:53.777742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.238 [2024-10-09 14:01:53.780385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.238 [2024-10-09 14:01:53.780534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:47.238 
pt2 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.238 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 malloc3 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 [2024-10-09 14:01:53.802784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:47.497 [2024-10-09 14:01:53.802942] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.497 [2024-10-09 14:01:53.802969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:47.497 [2024-10-09 14:01:53.802984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.497 [2024-10-09 14:01:53.805423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.497 [2024-10-09 14:01:53.805465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:47.497 pt3 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 malloc4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 [2024-10-09 14:01:53.827957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:47.497 [2024-10-09 14:01:53.828008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.497 [2024-10-09 14:01:53.828029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:47.497 [2024-10-09 14:01:53.828045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.497 [2024-10-09 14:01:53.830452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.497 [2024-10-09 14:01:53.830617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:47.497 pt4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 [2024-10-09 14:01:53.840061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:47.497 [2024-10-09 
14:01:53.842293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:47.497 [2024-10-09 14:01:53.842465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:47.497 [2024-10-09 14:01:53.842543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:47.497 [2024-10-09 14:01:53.842705] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:32:47.497 [2024-10-09 14:01:53.842720] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:47.497 [2024-10-09 14:01:53.842983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:47.497 [2024-10-09 14:01:53.843121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:32:47.497 [2024-10-09 14:01:53.843131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:32:47.497 [2024-10-09 14:01:53.843245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:47.497 "name": "raid_bdev1", 00:32:47.497 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:47.497 "strip_size_kb": 64, 00:32:47.497 "state": "online", 00:32:47.497 "raid_level": "concat", 00:32:47.497 "superblock": true, 00:32:47.497 "num_base_bdevs": 4, 00:32:47.497 "num_base_bdevs_discovered": 4, 00:32:47.497 "num_base_bdevs_operational": 4, 00:32:47.497 "base_bdevs_list": [ 00:32:47.497 { 00:32:47.497 "name": "pt1", 00:32:47.497 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:47.497 "is_configured": true, 00:32:47.497 "data_offset": 2048, 00:32:47.497 "data_size": 63488 00:32:47.497 }, 00:32:47.497 { 00:32:47.497 "name": "pt2", 00:32:47.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:47.497 "is_configured": true, 00:32:47.497 "data_offset": 2048, 00:32:47.497 "data_size": 63488 00:32:47.497 }, 00:32:47.497 { 00:32:47.497 "name": "pt3", 00:32:47.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:47.497 "is_configured": true, 00:32:47.497 "data_offset": 2048, 00:32:47.497 
"data_size": 63488 00:32:47.497 }, 00:32:47.497 { 00:32:47.497 "name": "pt4", 00:32:47.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:47.497 "is_configured": true, 00:32:47.497 "data_offset": 2048, 00:32:47.497 "data_size": 63488 00:32:47.497 } 00:32:47.497 ] 00:32:47.497 }' 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:47.497 14:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:48.064 [2024-10-09 14:01:54.316435] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.064 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.064 "name": "raid_bdev1", 00:32:48.064 "aliases": [ 00:32:48.064 "c654b3f1-8af0-4a55-ba89-e5a5a714d424" 
00:32:48.064 ], 00:32:48.064 "product_name": "Raid Volume", 00:32:48.064 "block_size": 512, 00:32:48.064 "num_blocks": 253952, 00:32:48.064 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:48.064 "assigned_rate_limits": { 00:32:48.064 "rw_ios_per_sec": 0, 00:32:48.064 "rw_mbytes_per_sec": 0, 00:32:48.064 "r_mbytes_per_sec": 0, 00:32:48.064 "w_mbytes_per_sec": 0 00:32:48.064 }, 00:32:48.064 "claimed": false, 00:32:48.064 "zoned": false, 00:32:48.064 "supported_io_types": { 00:32:48.065 "read": true, 00:32:48.065 "write": true, 00:32:48.065 "unmap": true, 00:32:48.065 "flush": true, 00:32:48.065 "reset": true, 00:32:48.065 "nvme_admin": false, 00:32:48.065 "nvme_io": false, 00:32:48.065 "nvme_io_md": false, 00:32:48.065 "write_zeroes": true, 00:32:48.065 "zcopy": false, 00:32:48.065 "get_zone_info": false, 00:32:48.065 "zone_management": false, 00:32:48.065 "zone_append": false, 00:32:48.065 "compare": false, 00:32:48.065 "compare_and_write": false, 00:32:48.065 "abort": false, 00:32:48.065 "seek_hole": false, 00:32:48.065 "seek_data": false, 00:32:48.065 "copy": false, 00:32:48.065 "nvme_iov_md": false 00:32:48.065 }, 00:32:48.065 "memory_domains": [ 00:32:48.065 { 00:32:48.065 "dma_device_id": "system", 00:32:48.065 "dma_device_type": 1 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.065 "dma_device_type": 2 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "system", 00:32:48.065 "dma_device_type": 1 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.065 "dma_device_type": 2 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "system", 00:32:48.065 "dma_device_type": 1 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.065 "dma_device_type": 2 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": "system", 00:32:48.065 "dma_device_type": 1 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:32:48.065 "dma_device_type": 2 00:32:48.065 } 00:32:48.065 ], 00:32:48.065 "driver_specific": { 00:32:48.065 "raid": { 00:32:48.065 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:48.065 "strip_size_kb": 64, 00:32:48.065 "state": "online", 00:32:48.065 "raid_level": "concat", 00:32:48.065 "superblock": true, 00:32:48.065 "num_base_bdevs": 4, 00:32:48.065 "num_base_bdevs_discovered": 4, 00:32:48.065 "num_base_bdevs_operational": 4, 00:32:48.065 "base_bdevs_list": [ 00:32:48.065 { 00:32:48.065 "name": "pt1", 00:32:48.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:48.065 "is_configured": true, 00:32:48.065 "data_offset": 2048, 00:32:48.065 "data_size": 63488 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "name": "pt2", 00:32:48.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.065 "is_configured": true, 00:32:48.065 "data_offset": 2048, 00:32:48.065 "data_size": 63488 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "name": "pt3", 00:32:48.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.065 "is_configured": true, 00:32:48.065 "data_offset": 2048, 00:32:48.065 "data_size": 63488 00:32:48.065 }, 00:32:48.065 { 00:32:48.065 "name": "pt4", 00:32:48.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:48.065 "is_configured": true, 00:32:48.065 "data_offset": 2048, 00:32:48.065 "data_size": 63488 00:32:48.065 } 00:32:48.065 ] 00:32:48.065 } 00:32:48.065 } 00:32:48.065 }' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:48.065 pt2 00:32:48.065 pt3 00:32:48.065 pt4' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.065 14:01:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.065 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:32:48.328 [2024-10-09 14:01:54.640448] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c654b3f1-8af0-4a55-ba89-e5a5a714d424 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c654b3f1-8af0-4a55-ba89-e5a5a714d424 ']' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 [2024-10-09 14:01:54.684138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:48.328 [2024-10-09 14:01:54.684272] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:48.328 [2024-10-09 14:01:54.684348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:48.328 [2024-10-09 14:01:54.684428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:48.328 [2024-10-09 14:01:54.684442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
jq -r '.[]' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:32:48.328 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.329 [2024-10-09 14:01:54.832223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:48.329 [2024-10-09 14:01:54.834465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:48.329 [2024-10-09 14:01:54.834513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:48.329 [2024-10-09 14:01:54.834543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:32:48.329 [2024-10-09 14:01:54.834602] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:48.329 [2024-10-09 14:01:54.834649] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:48.329 [2024-10-09 14:01:54.834672] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:48.329 [2024-10-09 14:01:54.834692] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:32:48.329 [2024-10-09 14:01:54.834710] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:48.329 [2024-10-09 14:01:54.834720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state 
configuring 00:32:48.329 request: 00:32:48.329 { 00:32:48.329 "name": "raid_bdev1", 00:32:48.329 "raid_level": "concat", 00:32:48.329 "base_bdevs": [ 00:32:48.329 "malloc1", 00:32:48.329 "malloc2", 00:32:48.329 "malloc3", 00:32:48.329 "malloc4" 00:32:48.329 ], 00:32:48.329 "strip_size_kb": 64, 00:32:48.329 "superblock": false, 00:32:48.329 "method": "bdev_raid_create", 00:32:48.329 "req_id": 1 00:32:48.329 } 00:32:48.329 Got JSON-RPC error response 00:32:48.329 response: 00:32:48.329 { 00:32:48.329 "code": -17, 00:32:48.329 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:48.329 } 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.329 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.588 [2024-10-09 14:01:54.896184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:48.588 [2024-10-09 14:01:54.896237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.588 [2024-10-09 14:01:54.896262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:48.588 [2024-10-09 14:01:54.896273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.588 [2024-10-09 14:01:54.898789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.588 [2024-10-09 14:01:54.898828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:48.588 [2024-10-09 14:01:54.898901] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:48.588 [2024-10-09 14:01:54.898939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:48.588 pt1 00:32:48.588 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:48.589 "name": "raid_bdev1", 00:32:48.589 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:48.589 "strip_size_kb": 64, 00:32:48.589 "state": "configuring", 00:32:48.589 "raid_level": "concat", 00:32:48.589 "superblock": true, 00:32:48.589 "num_base_bdevs": 4, 00:32:48.589 "num_base_bdevs_discovered": 1, 00:32:48.589 "num_base_bdevs_operational": 4, 00:32:48.589 "base_bdevs_list": [ 00:32:48.589 { 00:32:48.589 "name": "pt1", 00:32:48.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:48.589 "is_configured": true, 00:32:48.589 "data_offset": 2048, 00:32:48.589 "data_size": 63488 00:32:48.589 }, 00:32:48.589 { 00:32:48.589 "name": null, 00:32:48.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:48.589 "is_configured": false, 00:32:48.589 "data_offset": 2048, 00:32:48.589 "data_size": 63488 00:32:48.589 }, 00:32:48.589 { 00:32:48.589 "name": null, 00:32:48.589 
"uuid": "00000000-0000-0000-0000-000000000003", 00:32:48.589 "is_configured": false, 00:32:48.589 "data_offset": 2048, 00:32:48.589 "data_size": 63488 00:32:48.589 }, 00:32:48.589 { 00:32:48.589 "name": null, 00:32:48.589 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:48.589 "is_configured": false, 00:32:48.589 "data_offset": 2048, 00:32:48.589 "data_size": 63488 00:32:48.589 } 00:32:48.589 ] 00:32:48.589 }' 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:48.589 14:01:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.848 [2024-10-09 14:01:55.340309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:48.848 [2024-10-09 14:01:55.340377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:48.848 [2024-10-09 14:01:55.340405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:48.848 [2024-10-09 14:01:55.340418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:48.848 [2024-10-09 14:01:55.340877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:48.848 [2024-10-09 14:01:55.340954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:48.848 [2024-10-09 14:01:55.341045] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:48.848 [2024-10-09 14:01:55.341072] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:48.848 pt2 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.848 [2024-10-09 14:01:55.348307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:48.848 14:01:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.848 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.126 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.126 "name": "raid_bdev1", 00:32:49.126 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:49.126 "strip_size_kb": 64, 00:32:49.126 "state": "configuring", 00:32:49.126 "raid_level": "concat", 00:32:49.126 "superblock": true, 00:32:49.126 "num_base_bdevs": 4, 00:32:49.126 "num_base_bdevs_discovered": 1, 00:32:49.126 "num_base_bdevs_operational": 4, 00:32:49.126 "base_bdevs_list": [ 00:32:49.126 { 00:32:49.126 "name": "pt1", 00:32:49.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:49.126 "is_configured": true, 00:32:49.126 "data_offset": 2048, 00:32:49.126 "data_size": 63488 00:32:49.126 }, 00:32:49.126 { 00:32:49.126 "name": null, 00:32:49.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.126 "is_configured": false, 00:32:49.126 "data_offset": 0, 00:32:49.126 "data_size": 63488 00:32:49.126 }, 00:32:49.126 { 00:32:49.126 "name": null, 00:32:49.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:49.126 "is_configured": false, 00:32:49.126 "data_offset": 2048, 00:32:49.126 "data_size": 63488 00:32:49.126 }, 00:32:49.126 { 00:32:49.126 "name": null, 00:32:49.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:49.126 "is_configured": false, 00:32:49.126 "data_offset": 2048, 00:32:49.126 "data_size": 63488 00:32:49.126 } 00:32:49.126 ] 00:32:49.126 }' 00:32:49.126 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.126 14:01:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.387 [2024-10-09 14:01:55.804407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:49.387 [2024-10-09 14:01:55.804481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.387 [2024-10-09 14:01:55.804502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:49.387 [2024-10-09 14:01:55.804516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.387 [2024-10-09 14:01:55.804940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.387 [2024-10-09 14:01:55.804963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:49.387 [2024-10-09 14:01:55.805038] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:49.387 [2024-10-09 14:01:55.805063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:49.387 pt2 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.387 [2024-10-09 14:01:55.816357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:49.387 [2024-10-09 14:01:55.816537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.387 [2024-10-09 14:01:55.816573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:49.387 [2024-10-09 14:01:55.816588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.387 [2024-10-09 14:01:55.816941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.387 [2024-10-09 14:01:55.816971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:49.387 [2024-10-09 14:01:55.817033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:49.387 [2024-10-09 14:01:55.817056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:49.387 pt3 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.387 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.387 [2024-10-09 14:01:55.824366] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:32:49.387 [2024-10-09 14:01:55.824425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.388 [2024-10-09 14:01:55.824444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:49.388 [2024-10-09 14:01:55.824457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.388 [2024-10-09 14:01:55.824789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.388 [2024-10-09 14:01:55.824817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:32:49.388 [2024-10-09 14:01:55.824870] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:32:49.388 [2024-10-09 14:01:55.824898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:32:49.388 [2024-10-09 14:01:55.824993] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:32:49.388 [2024-10-09 14:01:55.825009] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:49.388 [2024-10-09 14:01:55.825251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:49.388 [2024-10-09 14:01:55.825368] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:32:49.388 [2024-10-09 14:01:55.825379] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:32:49.388 [2024-10-09 14:01:55.825476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.388 pt4 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.388 "name": "raid_bdev1", 00:32:49.388 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:49.388 "strip_size_kb": 64, 00:32:49.388 "state": "online", 00:32:49.388 "raid_level": "concat", 00:32:49.388 
"superblock": true, 00:32:49.388 "num_base_bdevs": 4, 00:32:49.388 "num_base_bdevs_discovered": 4, 00:32:49.388 "num_base_bdevs_operational": 4, 00:32:49.388 "base_bdevs_list": [ 00:32:49.388 { 00:32:49.388 "name": "pt1", 00:32:49.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:49.388 "is_configured": true, 00:32:49.388 "data_offset": 2048, 00:32:49.388 "data_size": 63488 00:32:49.388 }, 00:32:49.388 { 00:32:49.388 "name": "pt2", 00:32:49.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.388 "is_configured": true, 00:32:49.388 "data_offset": 2048, 00:32:49.388 "data_size": 63488 00:32:49.388 }, 00:32:49.388 { 00:32:49.388 "name": "pt3", 00:32:49.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:49.388 "is_configured": true, 00:32:49.388 "data_offset": 2048, 00:32:49.388 "data_size": 63488 00:32:49.388 }, 00:32:49.388 { 00:32:49.388 "name": "pt4", 00:32:49.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:49.388 "is_configured": true, 00:32:49.388 "data_offset": 2048, 00:32:49.388 "data_size": 63488 00:32:49.388 } 00:32:49.388 ] 00:32:49.388 }' 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.388 14:01:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:32:49.955 14:01:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 [2024-10-09 14:01:56.296839] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.955 "name": "raid_bdev1", 00:32:49.955 "aliases": [ 00:32:49.955 "c654b3f1-8af0-4a55-ba89-e5a5a714d424" 00:32:49.955 ], 00:32:49.955 "product_name": "Raid Volume", 00:32:49.955 "block_size": 512, 00:32:49.955 "num_blocks": 253952, 00:32:49.955 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:49.955 "assigned_rate_limits": { 00:32:49.955 "rw_ios_per_sec": 0, 00:32:49.955 "rw_mbytes_per_sec": 0, 00:32:49.955 "r_mbytes_per_sec": 0, 00:32:49.955 "w_mbytes_per_sec": 0 00:32:49.955 }, 00:32:49.955 "claimed": false, 00:32:49.955 "zoned": false, 00:32:49.955 "supported_io_types": { 00:32:49.955 "read": true, 00:32:49.955 "write": true, 00:32:49.955 "unmap": true, 00:32:49.955 "flush": true, 00:32:49.955 "reset": true, 00:32:49.955 "nvme_admin": false, 00:32:49.955 "nvme_io": false, 00:32:49.955 "nvme_io_md": false, 00:32:49.955 "write_zeroes": true, 00:32:49.955 "zcopy": false, 00:32:49.955 "get_zone_info": false, 00:32:49.955 "zone_management": false, 00:32:49.955 "zone_append": false, 00:32:49.955 "compare": false, 00:32:49.955 "compare_and_write": false, 00:32:49.955 "abort": false, 00:32:49.955 "seek_hole": false, 00:32:49.955 "seek_data": false, 00:32:49.955 "copy": false, 00:32:49.955 "nvme_iov_md": false 00:32:49.955 }, 00:32:49.955 
"memory_domains": [ 00:32:49.955 { 00:32:49.955 "dma_device_id": "system", 00:32:49.955 "dma_device_type": 1 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.955 "dma_device_type": 2 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "system", 00:32:49.955 "dma_device_type": 1 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.955 "dma_device_type": 2 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "system", 00:32:49.955 "dma_device_type": 1 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.955 "dma_device_type": 2 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "system", 00:32:49.955 "dma_device_type": 1 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.955 "dma_device_type": 2 00:32:49.955 } 00:32:49.955 ], 00:32:49.955 "driver_specific": { 00:32:49.955 "raid": { 00:32:49.955 "uuid": "c654b3f1-8af0-4a55-ba89-e5a5a714d424", 00:32:49.955 "strip_size_kb": 64, 00:32:49.955 "state": "online", 00:32:49.955 "raid_level": "concat", 00:32:49.955 "superblock": true, 00:32:49.955 "num_base_bdevs": 4, 00:32:49.955 "num_base_bdevs_discovered": 4, 00:32:49.955 "num_base_bdevs_operational": 4, 00:32:49.955 "base_bdevs_list": [ 00:32:49.955 { 00:32:49.955 "name": "pt1", 00:32:49.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:49.955 "is_configured": true, 00:32:49.955 "data_offset": 2048, 00:32:49.955 "data_size": 63488 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "name": "pt2", 00:32:49.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:49.955 "is_configured": true, 00:32:49.955 "data_offset": 2048, 00:32:49.955 "data_size": 63488 00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "name": "pt3", 00:32:49.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:49.955 "is_configured": true, 00:32:49.955 "data_offset": 2048, 00:32:49.955 "data_size": 63488 
00:32:49.955 }, 00:32:49.955 { 00:32:49.955 "name": "pt4", 00:32:49.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:32:49.955 "is_configured": true, 00:32:49.955 "data_offset": 2048, 00:32:49.955 "data_size": 63488 00:32:49.955 } 00:32:49.955 ] 00:32:49.955 } 00:32:49.955 } 00:32:49.955 }' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:32:49.955 pt2 00:32:49.955 pt3 00:32:49.955 pt4' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:49.955 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.215 [2024-10-09 14:01:56.624865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c654b3f1-8af0-4a55-ba89-e5a5a714d424 '!=' c654b3f1-8af0-4a55-ba89-e5a5a714d424 ']' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83827 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83827 ']' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83827 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83827 00:32:50.215 killing process with pid 83827 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83827' 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83827 00:32:50.215 [2024-10-09 14:01:56.719674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:50.215 [2024-10-09 14:01:56.719757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:50.215 [2024-10-09 14:01:56.719824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:50.215 14:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83827 00:32:50.215 [2024-10-09 14:01:56.719838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:32:50.474 [2024-10-09 14:01:56.766446] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:50.474 14:01:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:32:50.474 00:32:50.474 real 0m4.348s 00:32:50.474 user 0m6.944s 00:32:50.474 sys 0m1.021s 00:32:50.474 14:01:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:50.474 14:01:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.474 ************************************ 00:32:50.474 END TEST raid_superblock_test 
00:32:50.474 ************************************ 00:32:50.733 14:01:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:32:50.733 14:01:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:50.733 14:01:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:50.733 14:01:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:50.733 ************************************ 00:32:50.733 START TEST raid_read_error_test 00:32:50.733 ************************************ 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:50.733 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.K2czPoSssY 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84081 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84081 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 84081 ']' 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:50.734 14:01:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:50.734 [2024-10-09 14:01:57.205081] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:50.734 [2024-10-09 14:01:57.205488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84081 ] 00:32:50.993 [2024-10-09 14:01:57.384749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.993 [2024-10-09 14:01:57.430350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.993 [2024-10-09 14:01:57.475604] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:50.993 [2024-10-09 14:01:57.475644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.930 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.930 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 BaseBdev1_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 true 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 [2024-10-09 14:01:58.192384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:51.931 [2024-10-09 14:01:58.192444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.931 [2024-10-09 14:01:58.192468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:51.931 [2024-10-09 14:01:58.192487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.931 [2024-10-09 14:01:58.195225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.931 [2024-10-09 14:01:58.195266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:51.931 BaseBdev1 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 BaseBdev2_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 true 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 [2024-10-09 14:01:58.238587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:51.931 [2024-10-09 14:01:58.238640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.931 [2024-10-09 14:01:58.238664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:51.931 [2024-10-09 14:01:58.238676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.931 [2024-10-09 14:01:58.241235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.931 [2024-10-09 14:01:58.241274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:51.931 BaseBdev2 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 BaseBdev3_malloc 00:32:51.931 14:01:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 true 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 [2024-10-09 14:01:58.267877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:51.931 [2024-10-09 14:01:58.267926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.931 [2024-10-09 14:01:58.267949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:51.931 [2024-10-09 14:01:58.267960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.931 [2024-10-09 14:01:58.270447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.931 [2024-10-09 14:01:58.270662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:51.931 BaseBdev3 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 BaseBdev4_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 true 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 [2024-10-09 14:01:58.297100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:51.931 [2024-10-09 14:01:58.297149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:51.931 [2024-10-09 14:01:58.297175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:51.931 [2024-10-09 14:01:58.297187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:51.931 [2024-10-09 14:01:58.299771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:51.931 [2024-10-09 14:01:58.299810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:51.931 BaseBdev4 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 [2024-10-09 14:01:58.305150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:51.931 [2024-10-09 14:01:58.307434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:51.931 [2024-10-09 14:01:58.307521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:51.931 [2024-10-09 14:01:58.307600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:51.931 [2024-10-09 14:01:58.307802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:32:51.931 [2024-10-09 14:01:58.307820] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:51.931 [2024-10-09 14:01:58.308075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:51.931 [2024-10-09 14:01:58.308225] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:32:51.931 [2024-10-09 14:01:58.308239] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:32:51.931 [2024-10-09 14:01:58.308372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:51.931 14:01:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.931 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:51.931 "name": "raid_bdev1", 00:32:51.931 "uuid": "1aade243-68b7-488f-bbaa-cf1a8152c17d", 00:32:51.931 "strip_size_kb": 64, 00:32:51.931 "state": "online", 00:32:51.931 "raid_level": "concat", 00:32:51.931 "superblock": true, 00:32:51.931 "num_base_bdevs": 4, 00:32:51.931 "num_base_bdevs_discovered": 4, 00:32:51.931 "num_base_bdevs_operational": 4, 00:32:51.931 "base_bdevs_list": [ 
00:32:51.931 { 00:32:51.931 "name": "BaseBdev1", 00:32:51.932 "uuid": "d0978fdf-a958-59dc-82ac-522f944c5ec2", 00:32:51.932 "is_configured": true, 00:32:51.932 "data_offset": 2048, 00:32:51.932 "data_size": 63488 00:32:51.932 }, 00:32:51.932 { 00:32:51.932 "name": "BaseBdev2", 00:32:51.932 "uuid": "3f02fdc3-c7a5-57b9-819c-c03b34e8fea1", 00:32:51.932 "is_configured": true, 00:32:51.932 "data_offset": 2048, 00:32:51.932 "data_size": 63488 00:32:51.932 }, 00:32:51.932 { 00:32:51.932 "name": "BaseBdev3", 00:32:51.932 "uuid": "63a95cc6-93bb-577a-a349-ccdd88ffac7d", 00:32:51.932 "is_configured": true, 00:32:51.932 "data_offset": 2048, 00:32:51.932 "data_size": 63488 00:32:51.932 }, 00:32:51.932 { 00:32:51.932 "name": "BaseBdev4", 00:32:51.932 "uuid": "bb178ba3-dd18-5a5a-8ddf-21509c3d7248", 00:32:51.932 "is_configured": true, 00:32:51.932 "data_offset": 2048, 00:32:51.932 "data_size": 63488 00:32:51.932 } 00:32:51.932 ] 00:32:51.932 }' 00:32:51.932 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:51.932 14:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:52.500 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:52.500 14:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:52.500 [2024-10-09 14:01:58.873690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.440 14:01:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:53.440 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.441 14:01:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:53.441 "name": "raid_bdev1", 00:32:53.441 "uuid": "1aade243-68b7-488f-bbaa-cf1a8152c17d", 00:32:53.441 "strip_size_kb": 64, 00:32:53.441 "state": "online", 00:32:53.441 "raid_level": "concat", 00:32:53.441 "superblock": true, 00:32:53.441 "num_base_bdevs": 4, 00:32:53.441 "num_base_bdevs_discovered": 4, 00:32:53.441 "num_base_bdevs_operational": 4, 00:32:53.441 "base_bdevs_list": [ 00:32:53.441 { 00:32:53.441 "name": "BaseBdev1", 00:32:53.441 "uuid": "d0978fdf-a958-59dc-82ac-522f944c5ec2", 00:32:53.441 "is_configured": true, 00:32:53.441 "data_offset": 2048, 00:32:53.441 "data_size": 63488 00:32:53.441 }, 00:32:53.441 { 00:32:53.441 "name": "BaseBdev2", 00:32:53.441 "uuid": "3f02fdc3-c7a5-57b9-819c-c03b34e8fea1", 00:32:53.441 "is_configured": true, 00:32:53.441 "data_offset": 2048, 00:32:53.441 "data_size": 63488 00:32:53.441 }, 00:32:53.441 { 00:32:53.441 "name": "BaseBdev3", 00:32:53.441 "uuid": "63a95cc6-93bb-577a-a349-ccdd88ffac7d", 00:32:53.441 "is_configured": true, 00:32:53.441 "data_offset": 2048, 00:32:53.441 "data_size": 63488 00:32:53.441 }, 00:32:53.441 { 00:32:53.441 "name": "BaseBdev4", 00:32:53.441 "uuid": "bb178ba3-dd18-5a5a-8ddf-21509c3d7248", 00:32:53.441 "is_configured": true, 00:32:53.441 "data_offset": 2048, 00:32:53.441 "data_size": 63488 00:32:53.441 } 00:32:53.441 ] 00:32:53.441 }' 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:53.441 14:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.700 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:53.700 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.700 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:53.700 [2024-10-09 14:02:00.212817] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:53.700 [2024-10-09 14:02:00.212986] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:53.700 [2024-10-09 14:02:00.215845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:53.700 [2024-10-09 14:02:00.216003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.700 [2024-10-09 14:02:00.216065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:53.700 [2024-10-09 14:02:00.216076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:32:53.700 { 00:32:53.700 "results": [ 00:32:53.700 { 00:32:53.700 "job": "raid_bdev1", 00:32:53.700 "core_mask": "0x1", 00:32:53.700 "workload": "randrw", 00:32:53.700 "percentage": 50, 00:32:53.700 "status": "finished", 00:32:53.700 "queue_depth": 1, 00:32:53.700 "io_size": 131072, 00:32:53.700 "runtime": 1.336846, 00:32:53.700 "iops": 16035.504463490934, 00:32:53.700 "mibps": 2004.4380579363667, 00:32:53.700 "io_failed": 1, 00:32:53.700 "io_timeout": 0, 00:32:53.700 "avg_latency_us": 86.2079365967863, 00:32:53.700 "min_latency_us": 26.087619047619047, 00:32:53.700 "max_latency_us": 1497.9657142857143 00:32:53.700 } 00:32:53.700 ], 00:32:53.700 "core_count": 1 00:32:53.700 } 00:32:53.700 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84081 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 84081 ']' 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 84081 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.701 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84081 00:32:53.960 killing process with pid 84081 00:32:53.960 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:53.960 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:53.960 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84081' 00:32:53.960 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 84081 00:32:53.960 [2024-10-09 14:02:00.260633] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.960 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 84081 00:32:53.960 [2024-10-09 14:02:00.296807] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.K2czPoSssY 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:54.220 ************************************ 00:32:54.220 END TEST raid_read_error_test 00:32:54.220 ************************************ 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:32:54.220 00:32:54.220 real 0m3.475s 
00:32:54.220 user 0m4.442s 00:32:54.220 sys 0m0.642s 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:54.220 14:02:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.220 14:02:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:32:54.220 14:02:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:54.220 14:02:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:54.220 14:02:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:54.220 ************************************ 00:32:54.220 START TEST raid_write_error_test 00:32:54.220 ************************************ 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:54.220 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lf6qdnJQBk 00:32:54.221 14:02:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84210 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84210 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84210 ']' 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:32:54.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.221 14:02:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:54.221 [2024-10-09 14:02:00.745187] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:54.221 [2024-10-09 14:02:00.746128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84210 ] 00:32:54.480 [2024-10-09 14:02:00.931438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.480 [2024-10-09 14:02:00.983879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.740 [2024-10-09 14:02:01.035298] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:54.740 [2024-10-09 14:02:01.035352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 BaseBdev1_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 true 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 [2024-10-09 14:02:01.698514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:32:55.309 [2024-10-09 14:02:01.698583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.309 [2024-10-09 14:02:01.698614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:32:55.309 [2024-10-09 14:02:01.698645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.309 [2024-10-09 14:02:01.701120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.309 [2024-10-09 14:02:01.701164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:55.309 BaseBdev1 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 BaseBdev2_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:32:55.309 14:02:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 true 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 [2024-10-09 14:02:01.747524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:32:55.309 [2024-10-09 14:02:01.747615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.309 [2024-10-09 14:02:01.747637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:32:55.309 [2024-10-09 14:02:01.747648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.309 [2024-10-09 14:02:01.750054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.309 [2024-10-09 14:02:01.750092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:55.309 BaseBdev2 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:32:55.309 BaseBdev3_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 true 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.309 [2024-10-09 14:02:01.788519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:32:55.309 [2024-10-09 14:02:01.788581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.309 [2024-10-09 14:02:01.788604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:55.309 [2024-10-09 14:02:01.788628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.309 [2024-10-09 14:02:01.791101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.309 [2024-10-09 14:02:01.791138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:55.309 BaseBdev3 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.309 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.310 BaseBdev4_malloc 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.310 true 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.310 [2024-10-09 14:02:01.825833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:32:55.310 [2024-10-09 14:02:01.825881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:55.310 [2024-10-09 14:02:01.825906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:55.310 [2024-10-09 14:02:01.825917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:55.310 [2024-10-09 14:02:01.828308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:55.310 [2024-10-09 14:02:01.828444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:55.310 BaseBdev4 
00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.310 [2024-10-09 14:02:01.837899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:55.310 [2024-10-09 14:02:01.840117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:55.310 [2024-10-09 14:02:01.840204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:55.310 [2024-10-09 14:02:01.840257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:55.310 [2024-10-09 14:02:01.840454] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:32:55.310 [2024-10-09 14:02:01.840466] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:32:55.310 [2024-10-09 14:02:01.840757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:55.310 [2024-10-09 14:02:01.840905] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:32:55.310 [2024-10-09 14:02:01.840923] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:32:55.310 [2024-10-09 14:02:01.841037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:55.310 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.569 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:55.569 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:55.569 "name": "raid_bdev1", 00:32:55.569 "uuid": "e141d56b-3b42-46c0-ac62-991c9cb7ecb6", 00:32:55.569 "strip_size_kb": 64, 00:32:55.569 "state": "online", 00:32:55.569 "raid_level": "concat", 00:32:55.569 "superblock": true, 00:32:55.569 "num_base_bdevs": 4, 00:32:55.569 "num_base_bdevs_discovered": 4, 00:32:55.569 
"num_base_bdevs_operational": 4, 00:32:55.569 "base_bdevs_list": [ 00:32:55.569 { 00:32:55.569 "name": "BaseBdev1", 00:32:55.569 "uuid": "6c120515-1412-5f2a-85d8-aaa8ef6a0445", 00:32:55.569 "is_configured": true, 00:32:55.569 "data_offset": 2048, 00:32:55.569 "data_size": 63488 00:32:55.569 }, 00:32:55.569 { 00:32:55.569 "name": "BaseBdev2", 00:32:55.569 "uuid": "d8856b8e-1b64-567f-89f3-825954a7fcaa", 00:32:55.569 "is_configured": true, 00:32:55.569 "data_offset": 2048, 00:32:55.569 "data_size": 63488 00:32:55.569 }, 00:32:55.569 { 00:32:55.569 "name": "BaseBdev3", 00:32:55.569 "uuid": "a6f6012d-b433-5905-b798-fd3dfef470c2", 00:32:55.569 "is_configured": true, 00:32:55.569 "data_offset": 2048, 00:32:55.569 "data_size": 63488 00:32:55.569 }, 00:32:55.569 { 00:32:55.569 "name": "BaseBdev4", 00:32:55.569 "uuid": "111755f7-c061-5f19-9508-75178dc6a976", 00:32:55.569 "is_configured": true, 00:32:55.569 "data_offset": 2048, 00:32:55.569 "data_size": 63488 00:32:55.569 } 00:32:55.569 ] 00:32:55.569 }' 00:32:55.569 14:02:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:55.569 14:02:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.828 14:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:32:55.828 14:02:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:56.119 [2024-10-09 14:02:02.386804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:56.722 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:32:56.722 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.722 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.982 14:02:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:56.982 "name": "raid_bdev1", 00:32:56.982 "uuid": "e141d56b-3b42-46c0-ac62-991c9cb7ecb6", 00:32:56.982 "strip_size_kb": 64, 00:32:56.982 "state": "online", 00:32:56.982 "raid_level": "concat", 00:32:56.982 "superblock": true, 00:32:56.982 "num_base_bdevs": 4, 00:32:56.982 "num_base_bdevs_discovered": 4, 00:32:56.982 "num_base_bdevs_operational": 4, 00:32:56.982 "base_bdevs_list": [ 00:32:56.982 { 00:32:56.982 "name": "BaseBdev1", 00:32:56.982 "uuid": "6c120515-1412-5f2a-85d8-aaa8ef6a0445", 00:32:56.982 "is_configured": true, 00:32:56.982 "data_offset": 2048, 00:32:56.982 "data_size": 63488 00:32:56.982 }, 00:32:56.982 { 00:32:56.982 "name": "BaseBdev2", 00:32:56.982 "uuid": "d8856b8e-1b64-567f-89f3-825954a7fcaa", 00:32:56.982 "is_configured": true, 00:32:56.982 "data_offset": 2048, 00:32:56.982 "data_size": 63488 00:32:56.982 }, 00:32:56.982 { 00:32:56.982 "name": "BaseBdev3", 00:32:56.982 "uuid": "a6f6012d-b433-5905-b798-fd3dfef470c2", 00:32:56.982 "is_configured": true, 00:32:56.982 "data_offset": 2048, 00:32:56.982 "data_size": 63488 00:32:56.982 }, 00:32:56.982 { 00:32:56.982 "name": "BaseBdev4", 00:32:56.982 "uuid": "111755f7-c061-5f19-9508-75178dc6a976", 00:32:56.982 "is_configured": true, 00:32:56.982 "data_offset": 2048, 00:32:56.982 "data_size": 63488 00:32:56.982 } 00:32:56.982 ] 00:32:56.982 }' 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:56.982 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:32:57.242 [2024-10-09 14:02:03.718723] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:57.242 [2024-10-09 14:02:03.719045] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:57.242 [2024-10-09 14:02:03.721764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:57.242 [2024-10-09 14:02:03.721836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.242 [2024-10-09 14:02:03.721898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:57.242 [2024-10-09 14:02:03.721912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:32:57.242 { 00:32:57.242 "results": [ 00:32:57.242 { 00:32:57.242 "job": "raid_bdev1", 00:32:57.242 "core_mask": "0x1", 00:32:57.242 "workload": "randrw", 00:32:57.242 "percentage": 50, 00:32:57.242 "status": "finished", 00:32:57.242 "queue_depth": 1, 00:32:57.242 "io_size": 131072, 00:32:57.242 "runtime": 1.329288, 00:32:57.242 "iops": 13527.5425641396, 00:32:57.242 "mibps": 1690.94282051745, 00:32:57.242 "io_failed": 1, 00:32:57.242 "io_timeout": 0, 00:32:57.242 "avg_latency_us": 103.81104270435306, 00:32:57.242 "min_latency_us": 26.087619047619047, 00:32:57.242 "max_latency_us": 1404.3428571428572 00:32:57.242 } 00:32:57.242 ], 00:32:57.242 "core_count": 1 00:32:57.242 } 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84210 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84210 ']' 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84210 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84210 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84210' 00:32:57.242 killing process with pid 84210 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84210 00:32:57.242 [2024-10-09 14:02:03.764866] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:57.242 14:02:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84210 00:32:57.501 [2024-10-09 14:02:03.827668] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lf6qdnJQBk 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:32:57.760 00:32:57.760 real 0m3.600s 00:32:57.760 user 0m4.486s 
00:32:57.760 sys 0m0.630s 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:57.760 ************************************ 00:32:57.760 END TEST raid_write_error_test 00:32:57.760 ************************************ 00:32:57.760 14:02:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:32:57.760 14:02:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:32:57.760 14:02:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:32:57.760 14:02:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:57.760 14:02:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.760 14:02:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:57.760 ************************************ 00:32:57.760 START TEST raid_state_function_test 00:32:57.760 ************************************ 00:32:57.760 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:32:57.760 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:32:57.760 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.761 
14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:32:57.761 14:02:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:32:57.761 Process raid pid: 84348 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84348 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84348' 00:32:57.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84348 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84348 ']' 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.761 14:02:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.020 [2024-10-09 14:02:04.393540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:32:58.020 [2024-10-09 14:02:04.394014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.278 [2024-10-09 14:02:04.572268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.278 [2024-10-09 14:02:04.654224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.278 [2024-10-09 14:02:04.735043] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.278 [2024-10-09 14:02:04.735349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:58.846 [2024-10-09 14:02:05.349215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:58.846 [2024-10-09 14:02:05.349537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:58.846 [2024-10-09 14:02:05.349584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:58.846 [2024-10-09 14:02:05.349603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:58.846 [2024-10-09 14:02:05.349618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:32:58.846 [2024-10-09 14:02:05.349650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:58.846 [2024-10-09 14:02:05.349660] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:58.846 [2024-10-09 14:02:05.349676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:58.846 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.104 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.104 "name": "Existed_Raid", 00:32:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.104 "strip_size_kb": 0, 00:32:59.104 "state": "configuring", 00:32:59.104 "raid_level": "raid1", 00:32:59.104 "superblock": false, 00:32:59.104 "num_base_bdevs": 4, 00:32:59.104 "num_base_bdevs_discovered": 0, 00:32:59.104 "num_base_bdevs_operational": 4, 00:32:59.104 "base_bdevs_list": [ 00:32:59.104 { 00:32:59.104 "name": "BaseBdev1", 00:32:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.104 "is_configured": false, 00:32:59.104 "data_offset": 0, 00:32:59.104 "data_size": 0 00:32:59.104 }, 00:32:59.104 { 00:32:59.104 "name": "BaseBdev2", 00:32:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.104 "is_configured": false, 00:32:59.104 "data_offset": 0, 00:32:59.104 "data_size": 0 00:32:59.104 }, 00:32:59.104 { 00:32:59.104 "name": "BaseBdev3", 00:32:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.104 "is_configured": false, 00:32:59.104 "data_offset": 0, 00:32:59.104 "data_size": 0 00:32:59.104 }, 00:32:59.104 { 00:32:59.104 "name": "BaseBdev4", 00:32:59.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.104 "is_configured": false, 00:32:59.104 "data_offset": 0, 00:32:59.104 "data_size": 0 00:32:59.104 } 00:32:59.104 ] 00:32:59.104 }' 00:32:59.104 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.104 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 [2024-10-09 14:02:05.801184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:59.364 [2024-10-09 14:02:05.801512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 [2024-10-09 14:02:05.813217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:59.364 [2024-10-09 14:02:05.813411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:59.364 [2024-10-09 14:02:05.813510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:59.364 [2024-10-09 14:02:05.813575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:59.364 [2024-10-09 14:02:05.813615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:59.364 [2024-10-09 14:02:05.813668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:59.364 [2024-10-09 14:02:05.813764] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:59.364 [2024-10-09 14:02:05.813814] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 [2024-10-09 14:02:05.837381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:59.364 BaseBdev1 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.364 [ 00:32:59.364 { 00:32:59.364 "name": "BaseBdev1", 00:32:59.364 "aliases": [ 00:32:59.364 "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62" 00:32:59.364 ], 00:32:59.364 "product_name": "Malloc disk", 00:32:59.364 "block_size": 512, 00:32:59.364 "num_blocks": 65536, 00:32:59.364 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:32:59.364 "assigned_rate_limits": { 00:32:59.364 "rw_ios_per_sec": 0, 00:32:59.364 "rw_mbytes_per_sec": 0, 00:32:59.364 "r_mbytes_per_sec": 0, 00:32:59.364 "w_mbytes_per_sec": 0 00:32:59.364 }, 00:32:59.364 "claimed": true, 00:32:59.364 "claim_type": "exclusive_write", 00:32:59.364 "zoned": false, 00:32:59.364 "supported_io_types": { 00:32:59.364 "read": true, 00:32:59.364 "write": true, 00:32:59.364 "unmap": true, 00:32:59.364 "flush": true, 00:32:59.364 "reset": true, 00:32:59.364 "nvme_admin": false, 00:32:59.364 "nvme_io": false, 00:32:59.364 "nvme_io_md": false, 00:32:59.364 "write_zeroes": true, 00:32:59.364 "zcopy": true, 00:32:59.364 "get_zone_info": false, 00:32:59.364 "zone_management": false, 00:32:59.364 "zone_append": false, 00:32:59.364 "compare": false, 00:32:59.364 "compare_and_write": false, 00:32:59.364 "abort": true, 00:32:59.364 "seek_hole": false, 00:32:59.364 "seek_data": false, 00:32:59.364 "copy": true, 00:32:59.364 "nvme_iov_md": false 00:32:59.364 }, 00:32:59.364 "memory_domains": [ 00:32:59.364 { 00:32:59.364 "dma_device_id": "system", 00:32:59.364 "dma_device_type": 1 00:32:59.364 }, 00:32:59.364 { 00:32:59.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.364 "dma_device_type": 2 00:32:59.364 } 00:32:59.364 ], 00:32:59.364 "driver_specific": {} 00:32:59.364 } 00:32:59.364 ] 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.364 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.622 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.622 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.622 "name": "Existed_Raid", 00:32:59.622 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:59.622 "strip_size_kb": 0, 00:32:59.622 "state": "configuring", 00:32:59.622 "raid_level": "raid1", 00:32:59.622 "superblock": false, 00:32:59.622 "num_base_bdevs": 4, 00:32:59.622 "num_base_bdevs_discovered": 1, 00:32:59.622 "num_base_bdevs_operational": 4, 00:32:59.622 "base_bdevs_list": [ 00:32:59.622 { 00:32:59.622 "name": "BaseBdev1", 00:32:59.622 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:32:59.622 "is_configured": true, 00:32:59.622 "data_offset": 0, 00:32:59.622 "data_size": 65536 00:32:59.622 }, 00:32:59.622 { 00:32:59.622 "name": "BaseBdev2", 00:32:59.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.622 "is_configured": false, 00:32:59.622 "data_offset": 0, 00:32:59.622 "data_size": 0 00:32:59.622 }, 00:32:59.622 { 00:32:59.622 "name": "BaseBdev3", 00:32:59.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.622 "is_configured": false, 00:32:59.622 "data_offset": 0, 00:32:59.622 "data_size": 0 00:32:59.622 }, 00:32:59.622 { 00:32:59.622 "name": "BaseBdev4", 00:32:59.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.622 "is_configured": false, 00:32:59.622 "data_offset": 0, 00:32:59.622 "data_size": 0 00:32:59.622 } 00:32:59.622 ] 00:32:59.622 }' 00:32:59.622 14:02:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.622 14:02:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.881 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.882 [2024-10-09 14:02:06.305656] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:59.882 [2024-10-09 14:02:06.305759] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.882 [2024-10-09 14:02:06.313598] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:59.882 [2024-10-09 14:02:06.316309] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:59.882 [2024-10-09 14:02:06.316367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:59.882 [2024-10-09 14:02:06.316381] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:59.882 [2024-10-09 14:02:06.316397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:59.882 [2024-10-09 14:02:06.316407] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:59.882 [2024-10-09 14:02:06.316422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:32:59.882 14:02:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:59.882 "name": "Existed_Raid", 00:32:59.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.882 "strip_size_kb": 0, 00:32:59.882 "state": "configuring", 00:32:59.882 "raid_level": "raid1", 00:32:59.882 "superblock": false, 00:32:59.882 "num_base_bdevs": 4, 00:32:59.882 "num_base_bdevs_discovered": 1, 00:32:59.882 
"num_base_bdevs_operational": 4, 00:32:59.882 "base_bdevs_list": [ 00:32:59.882 { 00:32:59.882 "name": "BaseBdev1", 00:32:59.882 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:32:59.882 "is_configured": true, 00:32:59.882 "data_offset": 0, 00:32:59.882 "data_size": 65536 00:32:59.882 }, 00:32:59.882 { 00:32:59.882 "name": "BaseBdev2", 00:32:59.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.882 "is_configured": false, 00:32:59.882 "data_offset": 0, 00:32:59.882 "data_size": 0 00:32:59.882 }, 00:32:59.882 { 00:32:59.882 "name": "BaseBdev3", 00:32:59.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.882 "is_configured": false, 00:32:59.882 "data_offset": 0, 00:32:59.882 "data_size": 0 00:32:59.882 }, 00:32:59.882 { 00:32:59.882 "name": "BaseBdev4", 00:32:59.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.882 "is_configured": false, 00:32:59.882 "data_offset": 0, 00:32:59.882 "data_size": 0 00:32:59.882 } 00:32:59.882 ] 00:32:59.882 }' 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:59.882 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.449 [2024-10-09 14:02:06.798337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:00.449 BaseBdev2 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.449 [ 00:33:00.449 { 00:33:00.449 "name": "BaseBdev2", 00:33:00.449 "aliases": [ 00:33:00.449 "355af184-fabd-4b20-bfd7-575fdf220a22" 00:33:00.449 ], 00:33:00.449 "product_name": "Malloc disk", 00:33:00.449 "block_size": 512, 00:33:00.449 "num_blocks": 65536, 00:33:00.449 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:00.449 "assigned_rate_limits": { 00:33:00.449 "rw_ios_per_sec": 0, 00:33:00.449 "rw_mbytes_per_sec": 0, 00:33:00.449 "r_mbytes_per_sec": 0, 00:33:00.449 "w_mbytes_per_sec": 0 00:33:00.449 }, 00:33:00.449 "claimed": true, 00:33:00.449 "claim_type": "exclusive_write", 00:33:00.449 "zoned": false, 00:33:00.449 "supported_io_types": { 00:33:00.449 "read": true, 00:33:00.449 "write": true, 00:33:00.449 
"unmap": true, 00:33:00.449 "flush": true, 00:33:00.449 "reset": true, 00:33:00.449 "nvme_admin": false, 00:33:00.449 "nvme_io": false, 00:33:00.449 "nvme_io_md": false, 00:33:00.449 "write_zeroes": true, 00:33:00.449 "zcopy": true, 00:33:00.449 "get_zone_info": false, 00:33:00.449 "zone_management": false, 00:33:00.449 "zone_append": false, 00:33:00.449 "compare": false, 00:33:00.449 "compare_and_write": false, 00:33:00.449 "abort": true, 00:33:00.449 "seek_hole": false, 00:33:00.449 "seek_data": false, 00:33:00.449 "copy": true, 00:33:00.449 "nvme_iov_md": false 00:33:00.449 }, 00:33:00.449 "memory_domains": [ 00:33:00.449 { 00:33:00.449 "dma_device_id": "system", 00:33:00.449 "dma_device_type": 1 00:33:00.449 }, 00:33:00.449 { 00:33:00.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.449 "dma_device_type": 2 00:33:00.449 } 00:33:00.449 ], 00:33:00.449 "driver_specific": {} 00:33:00.449 } 00:33:00.449 ] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:00.449 14:02:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:00.449 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:00.449 "name": "Existed_Raid", 00:33:00.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.450 "strip_size_kb": 0, 00:33:00.450 "state": "configuring", 00:33:00.450 "raid_level": "raid1", 00:33:00.450 "superblock": false, 00:33:00.450 "num_base_bdevs": 4, 00:33:00.450 "num_base_bdevs_discovered": 2, 00:33:00.450 "num_base_bdevs_operational": 4, 00:33:00.450 "base_bdevs_list": [ 00:33:00.450 { 00:33:00.450 "name": "BaseBdev1", 00:33:00.450 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:33:00.450 "is_configured": true, 00:33:00.450 "data_offset": 0, 00:33:00.450 "data_size": 65536 00:33:00.450 }, 00:33:00.450 { 00:33:00.450 "name": "BaseBdev2", 00:33:00.450 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:00.450 "is_configured": true, 00:33:00.450 
"data_offset": 0, 00:33:00.450 "data_size": 65536 00:33:00.450 }, 00:33:00.450 { 00:33:00.450 "name": "BaseBdev3", 00:33:00.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.450 "is_configured": false, 00:33:00.450 "data_offset": 0, 00:33:00.450 "data_size": 0 00:33:00.450 }, 00:33:00.450 { 00:33:00.450 "name": "BaseBdev4", 00:33:00.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.450 "is_configured": false, 00:33:00.450 "data_offset": 0, 00:33:00.450 "data_size": 0 00:33:00.450 } 00:33:00.450 ] 00:33:00.450 }' 00:33:00.450 14:02:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:00.450 14:02:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 [2024-10-09 14:02:07.324061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:01.017 BaseBdev3 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 [ 00:33:01.017 { 00:33:01.017 "name": "BaseBdev3", 00:33:01.017 "aliases": [ 00:33:01.017 "7c4fe77a-6538-4a13-9888-5c7222405403" 00:33:01.017 ], 00:33:01.017 "product_name": "Malloc disk", 00:33:01.017 "block_size": 512, 00:33:01.017 "num_blocks": 65536, 00:33:01.017 "uuid": "7c4fe77a-6538-4a13-9888-5c7222405403", 00:33:01.017 "assigned_rate_limits": { 00:33:01.017 "rw_ios_per_sec": 0, 00:33:01.017 "rw_mbytes_per_sec": 0, 00:33:01.017 "r_mbytes_per_sec": 0, 00:33:01.017 "w_mbytes_per_sec": 0 00:33:01.017 }, 00:33:01.017 "claimed": true, 00:33:01.017 "claim_type": "exclusive_write", 00:33:01.017 "zoned": false, 00:33:01.017 "supported_io_types": { 00:33:01.017 "read": true, 00:33:01.017 "write": true, 00:33:01.017 "unmap": true, 00:33:01.017 "flush": true, 00:33:01.017 "reset": true, 00:33:01.017 "nvme_admin": false, 00:33:01.017 "nvme_io": false, 00:33:01.017 "nvme_io_md": false, 00:33:01.017 "write_zeroes": true, 00:33:01.017 "zcopy": true, 00:33:01.017 "get_zone_info": false, 00:33:01.017 "zone_management": false, 00:33:01.017 "zone_append": false, 00:33:01.017 "compare": false, 00:33:01.017 "compare_and_write": false, 00:33:01.017 "abort": true, 
00:33:01.017 "seek_hole": false, 00:33:01.017 "seek_data": false, 00:33:01.017 "copy": true, 00:33:01.017 "nvme_iov_md": false 00:33:01.017 }, 00:33:01.017 "memory_domains": [ 00:33:01.017 { 00:33:01.017 "dma_device_id": "system", 00:33:01.017 "dma_device_type": 1 00:33:01.017 }, 00:33:01.017 { 00:33:01.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.017 "dma_device_type": 2 00:33:01.017 } 00:33:01.017 ], 00:33:01.017 "driver_specific": {} 00:33:01.017 } 00:33:01.017 ] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.017 14:02:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.017 "name": "Existed_Raid", 00:33:01.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.017 "strip_size_kb": 0, 00:33:01.017 "state": "configuring", 00:33:01.017 "raid_level": "raid1", 00:33:01.017 "superblock": false, 00:33:01.017 "num_base_bdevs": 4, 00:33:01.017 "num_base_bdevs_discovered": 3, 00:33:01.017 "num_base_bdevs_operational": 4, 00:33:01.017 "base_bdevs_list": [ 00:33:01.017 { 00:33:01.017 "name": "BaseBdev1", 00:33:01.017 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:33:01.017 "is_configured": true, 00:33:01.017 "data_offset": 0, 00:33:01.017 "data_size": 65536 00:33:01.017 }, 00:33:01.017 { 00:33:01.017 "name": "BaseBdev2", 00:33:01.017 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:01.017 "is_configured": true, 00:33:01.017 "data_offset": 0, 00:33:01.017 "data_size": 65536 00:33:01.017 }, 00:33:01.017 { 00:33:01.017 "name": "BaseBdev3", 00:33:01.017 "uuid": "7c4fe77a-6538-4a13-9888-5c7222405403", 00:33:01.017 "is_configured": true, 00:33:01.017 "data_offset": 0, 00:33:01.017 "data_size": 65536 00:33:01.017 }, 00:33:01.017 { 00:33:01.017 "name": "BaseBdev4", 00:33:01.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.017 "is_configured": false, 00:33:01.017 "data_offset": 
0, 00:33:01.017 "data_size": 0 00:33:01.017 } 00:33:01.017 ] 00:33:01.017 }' 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.017 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.276 [2024-10-09 14:02:07.813721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:01.276 [2024-10-09 14:02:07.814023] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:33:01.276 [2024-10-09 14:02:07.814054] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:01.276 [2024-10-09 14:02:07.814462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:01.276 [2024-10-09 14:02:07.814686] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:33:01.276 [2024-10-09 14:02:07.814714] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:33:01.276 [2024-10-09 14:02:07.814992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.276 BaseBdev4 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.276 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.535 [ 00:33:01.535 { 00:33:01.535 "name": "BaseBdev4", 00:33:01.535 "aliases": [ 00:33:01.535 "b8046d03-3721-4e71-89bc-f4dcfbb3d47c" 00:33:01.535 ], 00:33:01.535 "product_name": "Malloc disk", 00:33:01.535 "block_size": 512, 00:33:01.535 "num_blocks": 65536, 00:33:01.535 "uuid": "b8046d03-3721-4e71-89bc-f4dcfbb3d47c", 00:33:01.535 "assigned_rate_limits": { 00:33:01.535 "rw_ios_per_sec": 0, 00:33:01.535 "rw_mbytes_per_sec": 0, 00:33:01.535 "r_mbytes_per_sec": 0, 00:33:01.535 "w_mbytes_per_sec": 0 00:33:01.535 }, 00:33:01.535 "claimed": true, 00:33:01.535 "claim_type": "exclusive_write", 00:33:01.535 "zoned": false, 00:33:01.535 "supported_io_types": { 00:33:01.535 "read": true, 00:33:01.535 "write": true, 00:33:01.535 "unmap": true, 00:33:01.535 "flush": true, 00:33:01.535 "reset": true, 00:33:01.535 "nvme_admin": false, 00:33:01.535 "nvme_io": 
false, 00:33:01.535 "nvme_io_md": false, 00:33:01.535 "write_zeroes": true, 00:33:01.535 "zcopy": true, 00:33:01.535 "get_zone_info": false, 00:33:01.535 "zone_management": false, 00:33:01.535 "zone_append": false, 00:33:01.535 "compare": false, 00:33:01.535 "compare_and_write": false, 00:33:01.535 "abort": true, 00:33:01.535 "seek_hole": false, 00:33:01.535 "seek_data": false, 00:33:01.535 "copy": true, 00:33:01.535 "nvme_iov_md": false 00:33:01.535 }, 00:33:01.535 "memory_domains": [ 00:33:01.535 { 00:33:01.535 "dma_device_id": "system", 00:33:01.535 "dma_device_type": 1 00:33:01.535 }, 00:33:01.535 { 00:33:01.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.535 "dma_device_type": 2 00:33:01.535 } 00:33:01.535 ], 00:33:01.535 "driver_specific": {} 00:33:01.535 } 00:33:01.535 ] 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:01.535 "name": "Existed_Raid", 00:33:01.535 "uuid": "53b25602-3be2-4010-ba54-2722821e8cca", 00:33:01.535 "strip_size_kb": 0, 00:33:01.535 "state": "online", 00:33:01.535 "raid_level": "raid1", 00:33:01.535 "superblock": false, 00:33:01.535 "num_base_bdevs": 4, 00:33:01.535 "num_base_bdevs_discovered": 4, 00:33:01.535 "num_base_bdevs_operational": 4, 00:33:01.535 "base_bdevs_list": [ 00:33:01.535 { 00:33:01.535 "name": "BaseBdev1", 00:33:01.535 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:33:01.535 "is_configured": true, 00:33:01.535 "data_offset": 0, 00:33:01.535 "data_size": 65536 00:33:01.535 }, 00:33:01.535 { 00:33:01.535 "name": "BaseBdev2", 00:33:01.535 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:01.535 "is_configured": true, 00:33:01.535 "data_offset": 0, 00:33:01.535 "data_size": 65536 00:33:01.535 }, 00:33:01.535 { 00:33:01.535 "name": "BaseBdev3", 00:33:01.535 "uuid": "7c4fe77a-6538-4a13-9888-5c7222405403", 
00:33:01.535 "is_configured": true, 00:33:01.535 "data_offset": 0, 00:33:01.535 "data_size": 65536 00:33:01.535 }, 00:33:01.535 { 00:33:01.535 "name": "BaseBdev4", 00:33:01.535 "uuid": "b8046d03-3721-4e71-89bc-f4dcfbb3d47c", 00:33:01.535 "is_configured": true, 00:33:01.535 "data_offset": 0, 00:33:01.535 "data_size": 65536 00:33:01.535 } 00:33:01.535 ] 00:33:01.535 }' 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:01.535 14:02:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:01.794 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:01.794 [2024-10-09 14:02:08.322383] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:02.053 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.053 14:02:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:02.053 "name": "Existed_Raid", 00:33:02.053 "aliases": [ 00:33:02.053 "53b25602-3be2-4010-ba54-2722821e8cca" 00:33:02.053 ], 00:33:02.053 "product_name": "Raid Volume", 00:33:02.053 "block_size": 512, 00:33:02.053 "num_blocks": 65536, 00:33:02.053 "uuid": "53b25602-3be2-4010-ba54-2722821e8cca", 00:33:02.053 "assigned_rate_limits": { 00:33:02.053 "rw_ios_per_sec": 0, 00:33:02.053 "rw_mbytes_per_sec": 0, 00:33:02.053 "r_mbytes_per_sec": 0, 00:33:02.053 "w_mbytes_per_sec": 0 00:33:02.053 }, 00:33:02.053 "claimed": false, 00:33:02.053 "zoned": false, 00:33:02.053 "supported_io_types": { 00:33:02.053 "read": true, 00:33:02.053 "write": true, 00:33:02.053 "unmap": false, 00:33:02.053 "flush": false, 00:33:02.053 "reset": true, 00:33:02.053 "nvme_admin": false, 00:33:02.053 "nvme_io": false, 00:33:02.053 "nvme_io_md": false, 00:33:02.053 "write_zeroes": true, 00:33:02.053 "zcopy": false, 00:33:02.053 "get_zone_info": false, 00:33:02.053 "zone_management": false, 00:33:02.053 "zone_append": false, 00:33:02.053 "compare": false, 00:33:02.053 "compare_and_write": false, 00:33:02.053 "abort": false, 00:33:02.053 "seek_hole": false, 00:33:02.053 "seek_data": false, 00:33:02.053 "copy": false, 00:33:02.053 "nvme_iov_md": false 00:33:02.053 }, 00:33:02.053 "memory_domains": [ 00:33:02.053 { 00:33:02.053 "dma_device_id": "system", 00:33:02.053 "dma_device_type": 1 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.053 "dma_device_type": 2 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "system", 00:33:02.053 "dma_device_type": 1 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.053 "dma_device_type": 2 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "system", 00:33:02.053 "dma_device_type": 1 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.053 "dma_device_type": 2 
00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "system", 00:33:02.053 "dma_device_type": 1 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.053 "dma_device_type": 2 00:33:02.053 } 00:33:02.053 ], 00:33:02.053 "driver_specific": { 00:33:02.053 "raid": { 00:33:02.053 "uuid": "53b25602-3be2-4010-ba54-2722821e8cca", 00:33:02.053 "strip_size_kb": 0, 00:33:02.053 "state": "online", 00:33:02.053 "raid_level": "raid1", 00:33:02.053 "superblock": false, 00:33:02.053 "num_base_bdevs": 4, 00:33:02.053 "num_base_bdevs_discovered": 4, 00:33:02.053 "num_base_bdevs_operational": 4, 00:33:02.053 "base_bdevs_list": [ 00:33:02.053 { 00:33:02.053 "name": "BaseBdev1", 00:33:02.053 "uuid": "a1dcffb0-4cd1-43d0-9b0e-dc191fbffd62", 00:33:02.053 "is_configured": true, 00:33:02.053 "data_offset": 0, 00:33:02.053 "data_size": 65536 00:33:02.053 }, 00:33:02.053 { 00:33:02.053 "name": "BaseBdev2", 00:33:02.053 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:02.054 "is_configured": true, 00:33:02.054 "data_offset": 0, 00:33:02.054 "data_size": 65536 00:33:02.054 }, 00:33:02.054 { 00:33:02.054 "name": "BaseBdev3", 00:33:02.054 "uuid": "7c4fe77a-6538-4a13-9888-5c7222405403", 00:33:02.054 "is_configured": true, 00:33:02.054 "data_offset": 0, 00:33:02.054 "data_size": 65536 00:33:02.054 }, 00:33:02.054 { 00:33:02.054 "name": "BaseBdev4", 00:33:02.054 "uuid": "b8046d03-3721-4e71-89bc-f4dcfbb3d47c", 00:33:02.054 "is_configured": true, 00:33:02.054 "data_offset": 0, 00:33:02.054 "data_size": 65536 00:33:02.054 } 00:33:02.054 ] 00:33:02.054 } 00:33:02.054 } 00:33:02.054 }' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:02.054 BaseBdev2 00:33:02.054 BaseBdev3 00:33:02.054 BaseBdev4' 00:33:02.054 
14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.054 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.313 [2024-10-09 14:02:08.654090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:02.313 "name": "Existed_Raid", 00:33:02.313 "uuid": "53b25602-3be2-4010-ba54-2722821e8cca", 00:33:02.313 "strip_size_kb": 0, 00:33:02.313 "state": "online", 00:33:02.313 "raid_level": "raid1", 00:33:02.313 "superblock": false, 00:33:02.313 "num_base_bdevs": 4, 00:33:02.313 "num_base_bdevs_discovered": 3, 00:33:02.313 "num_base_bdevs_operational": 3, 00:33:02.313 "base_bdevs_list": [ 00:33:02.313 { 00:33:02.313 "name": null, 00:33:02.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.313 "is_configured": false, 00:33:02.313 "data_offset": 0, 00:33:02.313 "data_size": 65536 00:33:02.313 }, 00:33:02.313 { 00:33:02.313 "name": "BaseBdev2", 00:33:02.313 "uuid": "355af184-fabd-4b20-bfd7-575fdf220a22", 00:33:02.313 "is_configured": true, 00:33:02.313 "data_offset": 0, 00:33:02.313 "data_size": 65536 00:33:02.313 }, 00:33:02.313 { 00:33:02.313 "name": "BaseBdev3", 00:33:02.313 "uuid": "7c4fe77a-6538-4a13-9888-5c7222405403", 00:33:02.313 "is_configured": true, 00:33:02.313 "data_offset": 0, 00:33:02.313 "data_size": 65536 00:33:02.313 }, 00:33:02.313 { 
00:33:02.313 "name": "BaseBdev4", 00:33:02.313 "uuid": "b8046d03-3721-4e71-89bc-f4dcfbb3d47c", 00:33:02.313 "is_configured": true, 00:33:02.313 "data_offset": 0, 00:33:02.313 "data_size": 65536 00:33:02.313 } 00:33:02.313 ] 00:33:02.313 }' 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:02.313 14:02:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 [2024-10-09 14:02:09.175523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 
14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 [2024-10-09 14:02:09.249728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 [2024-10-09 14:02:09.328188] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:02.881 [2024-10-09 14:02:09.328581] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:02.881 [2024-10-09 14:02:09.351702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:02.881 [2024-10-09 14:02:09.351983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:02.881 [2024-10-09 14:02:09.352196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.881 14:02:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:02.881 BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:02.881 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:03.140 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:03.141 14:02:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 [ 00:33:03.141 { 00:33:03.141 "name": "BaseBdev2", 00:33:03.141 "aliases": [ 00:33:03.141 "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5" 00:33:03.141 ], 00:33:03.141 "product_name": "Malloc disk", 00:33:03.141 "block_size": 512, 00:33:03.141 "num_blocks": 65536, 00:33:03.141 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:03.141 "assigned_rate_limits": { 00:33:03.141 "rw_ios_per_sec": 0, 00:33:03.141 "rw_mbytes_per_sec": 0, 00:33:03.141 "r_mbytes_per_sec": 0, 00:33:03.141 "w_mbytes_per_sec": 0 00:33:03.141 }, 00:33:03.141 "claimed": false, 00:33:03.141 "zoned": false, 00:33:03.141 "supported_io_types": { 00:33:03.141 "read": true, 00:33:03.141 "write": true, 00:33:03.141 "unmap": true, 00:33:03.141 "flush": true, 00:33:03.141 "reset": true, 00:33:03.141 "nvme_admin": false, 00:33:03.141 "nvme_io": false, 00:33:03.141 "nvme_io_md": false, 00:33:03.141 "write_zeroes": true, 00:33:03.141 "zcopy": true, 00:33:03.141 "get_zone_info": false, 00:33:03.141 "zone_management": false, 00:33:03.141 "zone_append": false, 00:33:03.141 "compare": false, 00:33:03.141 "compare_and_write": false, 
00:33:03.141 "abort": true, 00:33:03.141 "seek_hole": false, 00:33:03.141 "seek_data": false, 00:33:03.141 "copy": true, 00:33:03.141 "nvme_iov_md": false 00:33:03.141 }, 00:33:03.141 "memory_domains": [ 00:33:03.141 { 00:33:03.141 "dma_device_id": "system", 00:33:03.141 "dma_device_type": 1 00:33:03.141 }, 00:33:03.141 { 00:33:03.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.141 "dma_device_type": 2 00:33:03.141 } 00:33:03.141 ], 00:33:03.141 "driver_specific": {} 00:33:03.141 } 00:33:03.141 ] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 BaseBdev3 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:03.141 14:02:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 [ 00:33:03.141 { 00:33:03.141 "name": "BaseBdev3", 00:33:03.141 "aliases": [ 00:33:03.141 "fcffe773-55fb-456b-bbb0-4842823b4806" 00:33:03.141 ], 00:33:03.141 "product_name": "Malloc disk", 00:33:03.141 "block_size": 512, 00:33:03.141 "num_blocks": 65536, 00:33:03.141 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:03.141 "assigned_rate_limits": { 00:33:03.141 "rw_ios_per_sec": 0, 00:33:03.141 "rw_mbytes_per_sec": 0, 00:33:03.141 "r_mbytes_per_sec": 0, 00:33:03.141 "w_mbytes_per_sec": 0 00:33:03.141 }, 00:33:03.141 "claimed": false, 00:33:03.141 "zoned": false, 00:33:03.141 "supported_io_types": { 00:33:03.141 "read": true, 00:33:03.141 "write": true, 00:33:03.141 "unmap": true, 00:33:03.141 "flush": true, 00:33:03.141 "reset": true, 00:33:03.141 "nvme_admin": false, 00:33:03.141 "nvme_io": false, 00:33:03.141 "nvme_io_md": false, 00:33:03.141 "write_zeroes": true, 00:33:03.141 "zcopy": true, 00:33:03.141 "get_zone_info": false, 00:33:03.141 "zone_management": false, 00:33:03.141 "zone_append": false, 00:33:03.141 "compare": false, 00:33:03.141 "compare_and_write": false, 
00:33:03.141 "abort": true, 00:33:03.141 "seek_hole": false, 00:33:03.141 "seek_data": false, 00:33:03.141 "copy": true, 00:33:03.141 "nvme_iov_md": false 00:33:03.141 }, 00:33:03.141 "memory_domains": [ 00:33:03.141 { 00:33:03.141 "dma_device_id": "system", 00:33:03.141 "dma_device_type": 1 00:33:03.141 }, 00:33:03.141 { 00:33:03.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.141 "dma_device_type": 2 00:33:03.141 } 00:33:03.141 ], 00:33:03.141 "driver_specific": {} 00:33:03.141 } 00:33:03.141 ] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 BaseBdev4 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:03.141 14:02:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.141 [ 00:33:03.141 { 00:33:03.141 "name": "BaseBdev4", 00:33:03.141 "aliases": [ 00:33:03.141 "d5320337-b3bc-4f56-bc3d-bce498b32057" 00:33:03.141 ], 00:33:03.141 "product_name": "Malloc disk", 00:33:03.141 "block_size": 512, 00:33:03.141 "num_blocks": 65536, 00:33:03.141 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:03.141 "assigned_rate_limits": { 00:33:03.141 "rw_ios_per_sec": 0, 00:33:03.141 "rw_mbytes_per_sec": 0, 00:33:03.141 "r_mbytes_per_sec": 0, 00:33:03.141 "w_mbytes_per_sec": 0 00:33:03.141 }, 00:33:03.141 "claimed": false, 00:33:03.141 "zoned": false, 00:33:03.141 "supported_io_types": { 00:33:03.141 "read": true, 00:33:03.141 "write": true, 00:33:03.141 "unmap": true, 00:33:03.141 "flush": true, 00:33:03.141 "reset": true, 00:33:03.141 "nvme_admin": false, 00:33:03.141 "nvme_io": false, 00:33:03.141 "nvme_io_md": false, 00:33:03.141 "write_zeroes": true, 00:33:03.141 "zcopy": true, 00:33:03.141 "get_zone_info": false, 00:33:03.141 "zone_management": false, 00:33:03.141 "zone_append": false, 00:33:03.141 "compare": false, 00:33:03.141 "compare_and_write": false, 
00:33:03.141 "abort": true, 00:33:03.141 "seek_hole": false, 00:33:03.141 "seek_data": false, 00:33:03.141 "copy": true, 00:33:03.141 "nvme_iov_md": false 00:33:03.141 }, 00:33:03.141 "memory_domains": [ 00:33:03.141 { 00:33:03.141 "dma_device_id": "system", 00:33:03.141 "dma_device_type": 1 00:33:03.141 }, 00:33:03.141 { 00:33:03.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.141 "dma_device_type": 2 00:33:03.141 } 00:33:03.141 ], 00:33:03.141 "driver_specific": {} 00:33:03.141 } 00:33:03.141 ] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:03.141 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.142 [2024-10-09 14:02:09.592441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:03.142 [2024-10-09 14:02:09.592698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:03.142 [2024-10-09 14:02:09.592843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:03.142 [2024-10-09 14:02:09.596276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:03.142 [2024-10-09 14:02:09.596492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:03.142 14:02:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.142 "name": "Existed_Raid", 00:33:03.142 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:03.142 "strip_size_kb": 0, 00:33:03.142 "state": "configuring", 00:33:03.142 "raid_level": "raid1", 00:33:03.142 "superblock": false, 00:33:03.142 "num_base_bdevs": 4, 00:33:03.142 "num_base_bdevs_discovered": 3, 00:33:03.142 "num_base_bdevs_operational": 4, 00:33:03.142 "base_bdevs_list": [ 00:33:03.142 { 00:33:03.142 "name": "BaseBdev1", 00:33:03.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.142 "is_configured": false, 00:33:03.142 "data_offset": 0, 00:33:03.142 "data_size": 0 00:33:03.142 }, 00:33:03.142 { 00:33:03.142 "name": "BaseBdev2", 00:33:03.142 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:03.142 "is_configured": true, 00:33:03.142 "data_offset": 0, 00:33:03.142 "data_size": 65536 00:33:03.142 }, 00:33:03.142 { 00:33:03.142 "name": "BaseBdev3", 00:33:03.142 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:03.142 "is_configured": true, 00:33:03.142 "data_offset": 0, 00:33:03.142 "data_size": 65536 00:33:03.142 }, 00:33:03.142 { 00:33:03.142 "name": "BaseBdev4", 00:33:03.142 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:03.142 "is_configured": true, 00:33:03.142 "data_offset": 0, 00:33:03.142 "data_size": 65536 00:33:03.142 } 00:33:03.142 ] 00:33:03.142 }' 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.142 14:02:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.734 [2024-10-09 14:02:10.044944] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:03.734 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.734 "name": "Existed_Raid", 00:33:03.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.734 
"strip_size_kb": 0, 00:33:03.734 "state": "configuring", 00:33:03.734 "raid_level": "raid1", 00:33:03.734 "superblock": false, 00:33:03.734 "num_base_bdevs": 4, 00:33:03.734 "num_base_bdevs_discovered": 2, 00:33:03.734 "num_base_bdevs_operational": 4, 00:33:03.735 "base_bdevs_list": [ 00:33:03.735 { 00:33:03.735 "name": "BaseBdev1", 00:33:03.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.735 "is_configured": false, 00:33:03.735 "data_offset": 0, 00:33:03.735 "data_size": 0 00:33:03.735 }, 00:33:03.735 { 00:33:03.735 "name": null, 00:33:03.735 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:03.735 "is_configured": false, 00:33:03.735 "data_offset": 0, 00:33:03.735 "data_size": 65536 00:33:03.735 }, 00:33:03.735 { 00:33:03.735 "name": "BaseBdev3", 00:33:03.735 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:03.735 "is_configured": true, 00:33:03.735 "data_offset": 0, 00:33:03.735 "data_size": 65536 00:33:03.735 }, 00:33:03.735 { 00:33:03.735 "name": "BaseBdev4", 00:33:03.735 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:03.735 "is_configured": true, 00:33:03.735 "data_offset": 0, 00:33:03.735 "data_size": 65536 00:33:03.735 } 00:33:03.735 ] 00:33:03.735 }' 00:33:03.735 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.735 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.992 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.992 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:03.992 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:03.992 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:03.992 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.250 14:02:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.250 [2024-10-09 14:02:10.556420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:04.250 BaseBdev1 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.250 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.250 [ 00:33:04.250 { 00:33:04.250 "name": "BaseBdev1", 00:33:04.250 "aliases": [ 00:33:04.250 "676b39f2-0d23-4785-b357-6a5986b18708" 00:33:04.250 ], 00:33:04.250 "product_name": "Malloc disk", 00:33:04.250 "block_size": 512, 00:33:04.250 "num_blocks": 65536, 00:33:04.250 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:04.250 "assigned_rate_limits": { 00:33:04.250 "rw_ios_per_sec": 0, 00:33:04.250 "rw_mbytes_per_sec": 0, 00:33:04.250 "r_mbytes_per_sec": 0, 00:33:04.250 "w_mbytes_per_sec": 0 00:33:04.250 }, 00:33:04.250 "claimed": true, 00:33:04.250 "claim_type": "exclusive_write", 00:33:04.250 "zoned": false, 00:33:04.250 "supported_io_types": { 00:33:04.250 "read": true, 00:33:04.250 "write": true, 00:33:04.250 "unmap": true, 00:33:04.250 "flush": true, 00:33:04.250 "reset": true, 00:33:04.250 "nvme_admin": false, 00:33:04.250 "nvme_io": false, 00:33:04.250 "nvme_io_md": false, 00:33:04.250 "write_zeroes": true, 00:33:04.250 "zcopy": true, 00:33:04.250 "get_zone_info": false, 00:33:04.250 "zone_management": false, 00:33:04.250 "zone_append": false, 00:33:04.250 "compare": false, 00:33:04.250 "compare_and_write": false, 00:33:04.250 "abort": true, 00:33:04.250 "seek_hole": false, 00:33:04.250 "seek_data": false, 00:33:04.250 "copy": true, 00:33:04.250 "nvme_iov_md": false 00:33:04.250 }, 00:33:04.250 "memory_domains": [ 00:33:04.250 { 00:33:04.250 "dma_device_id": "system", 00:33:04.251 "dma_device_type": 1 00:33:04.251 }, 00:33:04.251 { 00:33:04.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:04.251 "dma_device_type": 2 00:33:04.251 } 00:33:04.251 ], 00:33:04.251 "driver_specific": {} 00:33:04.251 } 00:33:04.251 ] 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.251 "name": "Existed_Raid", 00:33:04.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.251 
"strip_size_kb": 0, 00:33:04.251 "state": "configuring", 00:33:04.251 "raid_level": "raid1", 00:33:04.251 "superblock": false, 00:33:04.251 "num_base_bdevs": 4, 00:33:04.251 "num_base_bdevs_discovered": 3, 00:33:04.251 "num_base_bdevs_operational": 4, 00:33:04.251 "base_bdevs_list": [ 00:33:04.251 { 00:33:04.251 "name": "BaseBdev1", 00:33:04.251 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:04.251 "is_configured": true, 00:33:04.251 "data_offset": 0, 00:33:04.251 "data_size": 65536 00:33:04.251 }, 00:33:04.251 { 00:33:04.251 "name": null, 00:33:04.251 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:04.251 "is_configured": false, 00:33:04.251 "data_offset": 0, 00:33:04.251 "data_size": 65536 00:33:04.251 }, 00:33:04.251 { 00:33:04.251 "name": "BaseBdev3", 00:33:04.251 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:04.251 "is_configured": true, 00:33:04.251 "data_offset": 0, 00:33:04.251 "data_size": 65536 00:33:04.251 }, 00:33:04.251 { 00:33:04.251 "name": "BaseBdev4", 00:33:04.251 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:04.251 "is_configured": true, 00:33:04.251 "data_offset": 0, 00:33:04.251 "data_size": 65536 00:33:04.251 } 00:33:04.251 ] 00:33:04.251 }' 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.251 14:02:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.508 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.508 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:04.508 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.508 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.508 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.767 
14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.767 [2024-10-09 14:02:11.084614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:04.767 "name": "Existed_Raid", 00:33:04.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.767 "strip_size_kb": 0, 00:33:04.767 "state": "configuring", 00:33:04.767 "raid_level": "raid1", 00:33:04.767 "superblock": false, 00:33:04.767 "num_base_bdevs": 4, 00:33:04.767 "num_base_bdevs_discovered": 2, 00:33:04.767 "num_base_bdevs_operational": 4, 00:33:04.767 "base_bdevs_list": [ 00:33:04.767 { 00:33:04.767 "name": "BaseBdev1", 00:33:04.767 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:04.767 "is_configured": true, 00:33:04.767 "data_offset": 0, 00:33:04.767 "data_size": 65536 00:33:04.767 }, 00:33:04.767 { 00:33:04.767 "name": null, 00:33:04.767 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:04.767 "is_configured": false, 00:33:04.767 "data_offset": 0, 00:33:04.767 "data_size": 65536 00:33:04.767 }, 00:33:04.767 { 00:33:04.767 "name": null, 00:33:04.767 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:04.767 "is_configured": false, 00:33:04.767 "data_offset": 0, 00:33:04.767 "data_size": 65536 00:33:04.767 }, 00:33:04.767 { 00:33:04.767 "name": "BaseBdev4", 00:33:04.767 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:04.767 "is_configured": true, 00:33:04.767 "data_offset": 0, 00:33:04.767 "data_size": 65536 00:33:04.767 } 00:33:04.767 ] 00:33:04.767 }' 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:04.767 14:02:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.026 [2024-10-09 14:02:11.544787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.026 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.285 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.285 "name": "Existed_Raid", 00:33:05.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.285 "strip_size_kb": 0, 00:33:05.285 "state": "configuring", 00:33:05.285 "raid_level": "raid1", 00:33:05.285 "superblock": false, 00:33:05.285 "num_base_bdevs": 4, 00:33:05.285 "num_base_bdevs_discovered": 3, 00:33:05.285 "num_base_bdevs_operational": 4, 00:33:05.285 "base_bdevs_list": [ 00:33:05.285 { 00:33:05.285 "name": "BaseBdev1", 00:33:05.285 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:05.285 "is_configured": true, 00:33:05.285 "data_offset": 0, 00:33:05.285 "data_size": 65536 00:33:05.285 }, 00:33:05.285 { 00:33:05.285 "name": null, 00:33:05.285 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:05.285 "is_configured": false, 00:33:05.285 "data_offset": 0, 00:33:05.285 "data_size": 65536 00:33:05.285 }, 00:33:05.285 { 
00:33:05.285 "name": "BaseBdev3", 00:33:05.285 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:05.285 "is_configured": true, 00:33:05.285 "data_offset": 0, 00:33:05.285 "data_size": 65536 00:33:05.285 }, 00:33:05.285 { 00:33:05.285 "name": "BaseBdev4", 00:33:05.286 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:05.286 "is_configured": true, 00:33:05.286 "data_offset": 0, 00:33:05.286 "data_size": 65536 00:33:05.286 } 00:33:05.286 ] 00:33:05.286 }' 00:33:05.286 14:02:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.286 14:02:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.544 [2024-10-09 14:02:12.056911] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.544 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:05.803 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.803 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.803 "name": "Existed_Raid", 00:33:05.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.803 "strip_size_kb": 0, 00:33:05.803 "state": "configuring", 00:33:05.803 "raid_level": "raid1", 00:33:05.803 "superblock": false, 00:33:05.803 
"num_base_bdevs": 4, 00:33:05.803 "num_base_bdevs_discovered": 2, 00:33:05.803 "num_base_bdevs_operational": 4, 00:33:05.803 "base_bdevs_list": [ 00:33:05.803 { 00:33:05.803 "name": null, 00:33:05.803 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:05.803 "is_configured": false, 00:33:05.803 "data_offset": 0, 00:33:05.803 "data_size": 65536 00:33:05.803 }, 00:33:05.803 { 00:33:05.803 "name": null, 00:33:05.803 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:05.803 "is_configured": false, 00:33:05.803 "data_offset": 0, 00:33:05.803 "data_size": 65536 00:33:05.803 }, 00:33:05.803 { 00:33:05.803 "name": "BaseBdev3", 00:33:05.803 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:05.803 "is_configured": true, 00:33:05.803 "data_offset": 0, 00:33:05.803 "data_size": 65536 00:33:05.803 }, 00:33:05.803 { 00:33:05.803 "name": "BaseBdev4", 00:33:05.803 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:05.803 "is_configured": true, 00:33:05.803 "data_offset": 0, 00:33:05.803 "data_size": 65536 00:33:05.803 } 00:33:05.803 ] 00:33:05.803 }' 00:33:05.804 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.804 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:06.062 14:02:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.062 [2024-10-09 14:02:12.559586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.062 14:02:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:06.062 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.063 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.321 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.321 "name": "Existed_Raid", 00:33:06.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.321 "strip_size_kb": 0, 00:33:06.321 "state": "configuring", 00:33:06.321 "raid_level": "raid1", 00:33:06.321 "superblock": false, 00:33:06.321 "num_base_bdevs": 4, 00:33:06.321 "num_base_bdevs_discovered": 3, 00:33:06.321 "num_base_bdevs_operational": 4, 00:33:06.321 "base_bdevs_list": [ 00:33:06.321 { 00:33:06.321 "name": null, 00:33:06.321 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:06.321 "is_configured": false, 00:33:06.321 "data_offset": 0, 00:33:06.321 "data_size": 65536 00:33:06.321 }, 00:33:06.321 { 00:33:06.321 "name": "BaseBdev2", 00:33:06.321 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:06.321 "is_configured": true, 00:33:06.321 "data_offset": 0, 00:33:06.321 "data_size": 65536 00:33:06.321 }, 00:33:06.321 { 00:33:06.321 "name": "BaseBdev3", 00:33:06.321 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:06.321 "is_configured": true, 00:33:06.321 "data_offset": 0, 00:33:06.321 "data_size": 65536 00:33:06.321 }, 00:33:06.321 { 00:33:06.321 "name": "BaseBdev4", 00:33:06.321 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:06.321 "is_configured": true, 00:33:06.321 "data_offset": 0, 00:33:06.321 "data_size": 65536 00:33:06.321 } 00:33:06.321 ] 00:33:06.321 }' 00:33:06.321 14:02:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.321 14:02:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.579 14:02:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 676b39f2-0d23-4785-b357-6a5986b18708 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.579 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.579 [2024-10-09 14:02:13.118691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:06.580 [2024-10-09 14:02:13.118887] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:33:06.580 [2024-10-09 14:02:13.118909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:06.580 
[2024-10-09 14:02:13.119184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:06.580 [2024-10-09 14:02:13.119323] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:33:06.580 [2024-10-09 14:02:13.119334] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:33:06.580 [2024-10-09 14:02:13.119514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:06.580 NewBaseBdev 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.580 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.837 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.837 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:06.837 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:06.837 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.837 [ 00:33:06.837 { 00:33:06.837 "name": "NewBaseBdev", 00:33:06.837 "aliases": [ 00:33:06.837 "676b39f2-0d23-4785-b357-6a5986b18708" 00:33:06.837 ], 00:33:06.837 "product_name": "Malloc disk", 00:33:06.837 "block_size": 512, 00:33:06.837 "num_blocks": 65536, 00:33:06.837 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:06.837 "assigned_rate_limits": { 00:33:06.837 "rw_ios_per_sec": 0, 00:33:06.837 "rw_mbytes_per_sec": 0, 00:33:06.837 "r_mbytes_per_sec": 0, 00:33:06.837 "w_mbytes_per_sec": 0 00:33:06.837 }, 00:33:06.837 "claimed": true, 00:33:06.837 "claim_type": "exclusive_write", 00:33:06.837 "zoned": false, 00:33:06.837 "supported_io_types": { 00:33:06.837 "read": true, 00:33:06.837 "write": true, 00:33:06.837 "unmap": true, 00:33:06.837 "flush": true, 00:33:06.837 "reset": true, 00:33:06.837 "nvme_admin": false, 00:33:06.837 "nvme_io": false, 00:33:06.837 "nvme_io_md": false, 00:33:06.837 "write_zeroes": true, 00:33:06.837 "zcopy": true, 00:33:06.837 "get_zone_info": false, 00:33:06.837 "zone_management": false, 00:33:06.837 "zone_append": false, 00:33:06.837 "compare": false, 00:33:06.837 "compare_and_write": false, 00:33:06.837 "abort": true, 00:33:06.837 "seek_hole": false, 00:33:06.837 "seek_data": false, 00:33:06.837 "copy": true, 00:33:06.837 "nvme_iov_md": false 00:33:06.837 }, 00:33:06.837 "memory_domains": [ 00:33:06.837 { 00:33:06.838 "dma_device_id": "system", 00:33:06.838 "dma_device_type": 1 00:33:06.838 }, 00:33:06.838 { 00:33:06.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.838 "dma_device_type": 2 00:33:06.838 } 00:33:06.838 ], 00:33:06.838 "driver_specific": {} 00:33:06.838 } 00:33:06.838 ] 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:06.838 "name": "Existed_Raid", 00:33:06.838 "uuid": "cc24742e-3dd6-41c9-ae9e-e4966c7fc75f", 00:33:06.838 "strip_size_kb": 0, 00:33:06.838 "state": "online", 00:33:06.838 
"raid_level": "raid1", 00:33:06.838 "superblock": false, 00:33:06.838 "num_base_bdevs": 4, 00:33:06.838 "num_base_bdevs_discovered": 4, 00:33:06.838 "num_base_bdevs_operational": 4, 00:33:06.838 "base_bdevs_list": [ 00:33:06.838 { 00:33:06.838 "name": "NewBaseBdev", 00:33:06.838 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:06.838 "is_configured": true, 00:33:06.838 "data_offset": 0, 00:33:06.838 "data_size": 65536 00:33:06.838 }, 00:33:06.838 { 00:33:06.838 "name": "BaseBdev2", 00:33:06.838 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:06.838 "is_configured": true, 00:33:06.838 "data_offset": 0, 00:33:06.838 "data_size": 65536 00:33:06.838 }, 00:33:06.838 { 00:33:06.838 "name": "BaseBdev3", 00:33:06.838 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:06.838 "is_configured": true, 00:33:06.838 "data_offset": 0, 00:33:06.838 "data_size": 65536 00:33:06.838 }, 00:33:06.838 { 00:33:06.838 "name": "BaseBdev4", 00:33:06.838 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:06.838 "is_configured": true, 00:33:06.838 "data_offset": 0, 00:33:06.838 "data_size": 65536 00:33:06.838 } 00:33:06.838 ] 00:33:06.838 }' 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:06.838 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.096 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.356 [2024-10-09 14:02:13.651173] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:07.356 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.356 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:07.356 "name": "Existed_Raid", 00:33:07.356 "aliases": [ 00:33:07.356 "cc24742e-3dd6-41c9-ae9e-e4966c7fc75f" 00:33:07.356 ], 00:33:07.356 "product_name": "Raid Volume", 00:33:07.356 "block_size": 512, 00:33:07.356 "num_blocks": 65536, 00:33:07.356 "uuid": "cc24742e-3dd6-41c9-ae9e-e4966c7fc75f", 00:33:07.356 "assigned_rate_limits": { 00:33:07.356 "rw_ios_per_sec": 0, 00:33:07.356 "rw_mbytes_per_sec": 0, 00:33:07.356 "r_mbytes_per_sec": 0, 00:33:07.356 "w_mbytes_per_sec": 0 00:33:07.356 }, 00:33:07.356 "claimed": false, 00:33:07.356 "zoned": false, 00:33:07.356 "supported_io_types": { 00:33:07.356 "read": true, 00:33:07.356 "write": true, 00:33:07.356 "unmap": false, 00:33:07.356 "flush": false, 00:33:07.356 "reset": true, 00:33:07.356 "nvme_admin": false, 00:33:07.356 "nvme_io": false, 00:33:07.356 "nvme_io_md": false, 00:33:07.356 "write_zeroes": true, 00:33:07.356 "zcopy": false, 00:33:07.356 "get_zone_info": false, 00:33:07.356 "zone_management": false, 00:33:07.356 "zone_append": false, 00:33:07.356 "compare": false, 00:33:07.356 "compare_and_write": false, 00:33:07.356 "abort": false, 00:33:07.356 "seek_hole": false, 00:33:07.356 "seek_data": false, 00:33:07.356 
"copy": false, 00:33:07.356 "nvme_iov_md": false 00:33:07.356 }, 00:33:07.356 "memory_domains": [ 00:33:07.356 { 00:33:07.356 "dma_device_id": "system", 00:33:07.356 "dma_device_type": 1 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.356 "dma_device_type": 2 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "system", 00:33:07.356 "dma_device_type": 1 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.356 "dma_device_type": 2 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "system", 00:33:07.356 "dma_device_type": 1 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.356 "dma_device_type": 2 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "system", 00:33:07.356 "dma_device_type": 1 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.356 "dma_device_type": 2 00:33:07.356 } 00:33:07.356 ], 00:33:07.356 "driver_specific": { 00:33:07.356 "raid": { 00:33:07.356 "uuid": "cc24742e-3dd6-41c9-ae9e-e4966c7fc75f", 00:33:07.356 "strip_size_kb": 0, 00:33:07.356 "state": "online", 00:33:07.356 "raid_level": "raid1", 00:33:07.356 "superblock": false, 00:33:07.356 "num_base_bdevs": 4, 00:33:07.356 "num_base_bdevs_discovered": 4, 00:33:07.356 "num_base_bdevs_operational": 4, 00:33:07.356 "base_bdevs_list": [ 00:33:07.356 { 00:33:07.356 "name": "NewBaseBdev", 00:33:07.356 "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", 00:33:07.356 "is_configured": true, 00:33:07.356 "data_offset": 0, 00:33:07.356 "data_size": 65536 00:33:07.356 }, 00:33:07.356 { 00:33:07.356 "name": "BaseBdev2", 00:33:07.356 "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", 00:33:07.357 "is_configured": true, 00:33:07.357 "data_offset": 0, 00:33:07.357 "data_size": 65536 00:33:07.357 }, 00:33:07.357 { 00:33:07.357 "name": "BaseBdev3", 00:33:07.357 "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", 00:33:07.357 
"is_configured": true, 00:33:07.357 "data_offset": 0, 00:33:07.357 "data_size": 65536 00:33:07.357 }, 00:33:07.357 { 00:33:07.357 "name": "BaseBdev4", 00:33:07.357 "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", 00:33:07.357 "is_configured": true, 00:33:07.357 "data_offset": 0, 00:33:07.357 "data_size": 65536 00:33:07.357 } 00:33:07.357 ] 00:33:07.357 } 00:33:07.357 } 00:33:07.357 }' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:07.357 BaseBdev2 00:33:07.357 BaseBdev3 00:33:07.357 BaseBdev4' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:07.357 14:02:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.357 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:07.616 14:02:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:07.616 [2024-10-09 14:02:13.974953] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:07.616 [2024-10-09 14:02:13.974984] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:07.616 [2024-10-09 14:02:13.975068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:07.616 [2024-10-09 14:02:13.975332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:07.616 [2024-10-09 14:02:13.975353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 84348 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84348 ']' 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84348 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.616 14:02:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84348 00:33:07.616 killing process with pid 84348 00:33:07.616 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.616 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.616 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84348' 00:33:07.616 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84348 00:33:07.616 [2024-10-09 14:02:14.021893] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:07.616 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84348 00:33:07.616 [2024-10-09 14:02:14.063745] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:07.874 14:02:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:33:07.874 00:33:07.874 real 0m10.039s 00:33:07.874 user 0m17.145s 00:33:07.874 sys 0m2.194s 00:33:07.874 ************************************ 00:33:07.874 END TEST raid_state_function_test 00:33:07.874 ************************************ 00:33:07.874 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.874 14:02:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
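The `verify_raid_bdev_state` helper exercised throughout the test above (bdev_raid.sh@103-115) selects one raid bdev out of the `bdev_raid_get_bdevs all` RPC output with jq and compares its `state`, `raid_level`, `strip_size_kb`, and operational/discovered base-bdev counts against expectations. The following is a minimal illustrative sketch of that same check in Python rather than the original bash/jq pipeline; the `raid_bdevs` JSON is trimmed from the "online" dump logged above, and the helper name mirrors the shell function but is otherwise a standalone assumption, not SPDK code.

```python
import json

# Shape of `rpc.py bdev_raid_get_bdevs all` output as dumped in the log above,
# trimmed to the fields the test actually inspects.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "NewBaseBdev", "uuid": "676b39f2-0d23-4785-b357-6a5986b18708", "is_configured": true},
      {"name": "BaseBdev2",   "uuid": "10a8b4e9-5f2a-40f8-888c-3d7a9801c7d5", "is_configured": true},
      {"name": "BaseBdev3",   "uuid": "fcffe773-55fb-456b-bbb0-4842823b4806", "is_configured": true},
      {"name": "BaseBdev4",   "uuid": "d5320337-b3bc-4f56-bc3d-bce498b32057", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Python sketch of bdev_raid.sh's verify_raid_bdev_state: pick the raid
    bdev by name (the jq `select(.name == ...)` step) and compare the fields
    the shell test asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    # The shell helper counts configured entries in base_bdevs_list and
    # checks they agree with num_base_bdevs_discovered.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_operational
            and discovered == info["num_base_bdevs_discovered"])

# Matches the final check in the log: Existed_Raid online raid1 0 4
ok = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid1", 0, 4)
```

Against a live target the input would come from `rpc.py bdev_raid_get_bdevs all` instead of an inline string; the comparison logic is the same as the jq-driven shell version.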
00:33:07.874 14:02:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:33:07.874 14:02:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:07.874 14:02:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.874 14:02:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:07.874 ************************************ 00:33:07.874 START TEST raid_state_function_test_sb 00:33:07.874 ************************************ 00:33:07.874 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.875 
14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:33:07.875 Process raid pid: 84998 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84998 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84998' 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84998 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84998 ']' 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:07.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:07.875 14:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:08.134 [2024-10-09 14:02:14.503483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:08.134 [2024-10-09 14:02:14.504688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.392 [2024-10-09 14:02:14.685702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.392 [2024-10-09 14:02:14.728509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.392 [2024-10-09 14:02:14.771409] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.392 [2024-10-09 14:02:14.771448] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:08.960 [2024-10-09 14:02:15.366434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:08.960 [2024-10-09 14:02:15.366489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:08.960 [2024-10-09 14:02:15.366504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:08.960 [2024-10-09 14:02:15.366517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:08.960 [2024-10-09 14:02:15.366528] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:33:08.960 [2024-10-09 14:02:15.366545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:08.960 [2024-10-09 14:02:15.366564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:08.960 [2024-10-09 14:02:15.366577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.960 14:02:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.960 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:08.960 "name": "Existed_Raid", 00:33:08.960 "uuid": "e5845b1b-9633-4a33-92b1-a02ff2e43cab", 00:33:08.960 "strip_size_kb": 0, 00:33:08.960 "state": "configuring", 00:33:08.960 "raid_level": "raid1", 00:33:08.960 "superblock": true, 00:33:08.960 "num_base_bdevs": 4, 00:33:08.960 "num_base_bdevs_discovered": 0, 00:33:08.960 "num_base_bdevs_operational": 4, 00:33:08.960 "base_bdevs_list": [ 00:33:08.961 { 00:33:08.961 "name": "BaseBdev1", 00:33:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.961 "is_configured": false, 00:33:08.961 "data_offset": 0, 00:33:08.961 "data_size": 0 00:33:08.961 }, 00:33:08.961 { 00:33:08.961 "name": "BaseBdev2", 00:33:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.961 "is_configured": false, 00:33:08.961 "data_offset": 0, 00:33:08.961 "data_size": 0 00:33:08.961 }, 00:33:08.961 { 00:33:08.961 "name": "BaseBdev3", 00:33:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.961 "is_configured": false, 00:33:08.961 "data_offset": 0, 00:33:08.961 "data_size": 0 00:33:08.961 }, 00:33:08.961 { 00:33:08.961 "name": "BaseBdev4", 00:33:08.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:08.961 "is_configured": false, 00:33:08.961 "data_offset": 0, 00:33:08.961 "data_size": 0 00:33:08.961 } 00:33:08.961 ] 00:33:08.961 }' 00:33:08.961 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:08.961 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.527 14:02:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:09.527 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.527 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.527 [2024-10-09 14:02:15.826441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:09.527 [2024-10-09 14:02:15.826489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:33:09.527 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.527 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:09.527 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.528 [2024-10-09 14:02:15.834482] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:09.528 [2024-10-09 14:02:15.834526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:09.528 [2024-10-09 14:02:15.834536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:09.528 [2024-10-09 14:02:15.834561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:09.528 [2024-10-09 14:02:15.834570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:09.528 [2024-10-09 14:02:15.834583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:09.528 [2024-10-09 14:02:15.834590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:33:09.528 [2024-10-09 14:02:15.834603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.528 [2024-10-09 14:02:15.851896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.528 BaseBdev1 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.528 [ 00:33:09.528 { 00:33:09.528 "name": "BaseBdev1", 00:33:09.528 "aliases": [ 00:33:09.528 "7287664b-17d5-4c30-9598-d2364985a7d3" 00:33:09.528 ], 00:33:09.528 "product_name": "Malloc disk", 00:33:09.528 "block_size": 512, 00:33:09.528 "num_blocks": 65536, 00:33:09.528 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:09.528 "assigned_rate_limits": { 00:33:09.528 "rw_ios_per_sec": 0, 00:33:09.528 "rw_mbytes_per_sec": 0, 00:33:09.528 "r_mbytes_per_sec": 0, 00:33:09.528 "w_mbytes_per_sec": 0 00:33:09.528 }, 00:33:09.528 "claimed": true, 00:33:09.528 "claim_type": "exclusive_write", 00:33:09.528 "zoned": false, 00:33:09.528 "supported_io_types": { 00:33:09.528 "read": true, 00:33:09.528 "write": true, 00:33:09.528 "unmap": true, 00:33:09.528 "flush": true, 00:33:09.528 "reset": true, 00:33:09.528 "nvme_admin": false, 00:33:09.528 "nvme_io": false, 00:33:09.528 "nvme_io_md": false, 00:33:09.528 "write_zeroes": true, 00:33:09.528 "zcopy": true, 00:33:09.528 "get_zone_info": false, 00:33:09.528 "zone_management": false, 00:33:09.528 "zone_append": false, 00:33:09.528 "compare": false, 00:33:09.528 "compare_and_write": false, 00:33:09.528 "abort": true, 00:33:09.528 "seek_hole": false, 00:33:09.528 "seek_data": false, 00:33:09.528 "copy": true, 00:33:09.528 "nvme_iov_md": false 00:33:09.528 }, 00:33:09.528 "memory_domains": [ 00:33:09.528 { 00:33:09.528 "dma_device_id": "system", 00:33:09.528 "dma_device_type": 1 00:33:09.528 }, 00:33:09.528 { 00:33:09.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:09.528 "dma_device_type": 2 00:33:09.528 } 00:33:09.528 
], 00:33:09.528 "driver_specific": {} 00:33:09.528 } 00:33:09.528 ] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.528 14:02:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.528 "name": "Existed_Raid", 00:33:09.528 "uuid": "71ccd8be-c7c5-4826-be19-a456f39a6e49", 00:33:09.528 "strip_size_kb": 0, 00:33:09.528 "state": "configuring", 00:33:09.528 "raid_level": "raid1", 00:33:09.528 "superblock": true, 00:33:09.528 "num_base_bdevs": 4, 00:33:09.528 "num_base_bdevs_discovered": 1, 00:33:09.528 "num_base_bdevs_operational": 4, 00:33:09.528 "base_bdevs_list": [ 00:33:09.528 { 00:33:09.528 "name": "BaseBdev1", 00:33:09.528 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:09.528 "is_configured": true, 00:33:09.528 "data_offset": 2048, 00:33:09.528 "data_size": 63488 00:33:09.528 }, 00:33:09.528 { 00:33:09.528 "name": "BaseBdev2", 00:33:09.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.528 "is_configured": false, 00:33:09.528 "data_offset": 0, 00:33:09.528 "data_size": 0 00:33:09.528 }, 00:33:09.528 { 00:33:09.528 "name": "BaseBdev3", 00:33:09.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.528 "is_configured": false, 00:33:09.528 "data_offset": 0, 00:33:09.528 "data_size": 0 00:33:09.528 }, 00:33:09.528 { 00:33:09.528 "name": "BaseBdev4", 00:33:09.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.528 "is_configured": false, 00:33:09.528 "data_offset": 0, 00:33:09.528 "data_size": 0 00:33:09.528 } 00:33:09.528 ] 00:33:09.528 }' 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.528 14:02:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.786 14:02:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.786 [2024-10-09 14:02:16.316043] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:09.786 [2024-10-09 14:02:16.316102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.786 [2024-10-09 14:02:16.324096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:09.786 [2024-10-09 14:02:16.326346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:09.786 [2024-10-09 14:02:16.326394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:09.786 [2024-10-09 14:02:16.326405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:09.786 [2024-10-09 14:02:16.326418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:09.786 [2024-10-09 14:02:16.326426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:09.786 [2024-10-09 14:02:16.326438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.786 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.787 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.787 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.787 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:09.787 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.046 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.046 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.046 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:33:10.046 "name": "Existed_Raid", 00:33:10.046 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:10.046 "strip_size_kb": 0, 00:33:10.046 "state": "configuring", 00:33:10.046 "raid_level": "raid1", 00:33:10.046 "superblock": true, 00:33:10.046 "num_base_bdevs": 4, 00:33:10.046 "num_base_bdevs_discovered": 1, 00:33:10.046 "num_base_bdevs_operational": 4, 00:33:10.046 "base_bdevs_list": [ 00:33:10.046 { 00:33:10.046 "name": "BaseBdev1", 00:33:10.046 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:10.046 "is_configured": true, 00:33:10.046 "data_offset": 2048, 00:33:10.046 "data_size": 63488 00:33:10.046 }, 00:33:10.046 { 00:33:10.046 "name": "BaseBdev2", 00:33:10.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.046 "is_configured": false, 00:33:10.046 "data_offset": 0, 00:33:10.046 "data_size": 0 00:33:10.046 }, 00:33:10.046 { 00:33:10.046 "name": "BaseBdev3", 00:33:10.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.046 "is_configured": false, 00:33:10.046 "data_offset": 0, 00:33:10.046 "data_size": 0 00:33:10.046 }, 00:33:10.046 { 00:33:10.046 "name": "BaseBdev4", 00:33:10.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.046 "is_configured": false, 00:33:10.046 "data_offset": 0, 00:33:10.046 "data_size": 0 00:33:10.046 } 00:33:10.046 ] 00:33:10.046 }' 00:33:10.046 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.046 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.305 [2024-10-09 14:02:16.789990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:33:10.305 BaseBdev2 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.305 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.305 [ 00:33:10.305 { 00:33:10.305 "name": "BaseBdev2", 00:33:10.305 "aliases": [ 00:33:10.305 "31c25647-9c3c-49a5-a72b-22d78e847984" 00:33:10.305 ], 00:33:10.305 "product_name": "Malloc disk", 00:33:10.305 "block_size": 512, 00:33:10.305 "num_blocks": 65536, 00:33:10.305 "uuid": "31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:10.305 
"assigned_rate_limits": { 00:33:10.305 "rw_ios_per_sec": 0, 00:33:10.305 "rw_mbytes_per_sec": 0, 00:33:10.305 "r_mbytes_per_sec": 0, 00:33:10.305 "w_mbytes_per_sec": 0 00:33:10.305 }, 00:33:10.305 "claimed": true, 00:33:10.305 "claim_type": "exclusive_write", 00:33:10.305 "zoned": false, 00:33:10.305 "supported_io_types": { 00:33:10.305 "read": true, 00:33:10.305 "write": true, 00:33:10.305 "unmap": true, 00:33:10.305 "flush": true, 00:33:10.305 "reset": true, 00:33:10.305 "nvme_admin": false, 00:33:10.305 "nvme_io": false, 00:33:10.305 "nvme_io_md": false, 00:33:10.305 "write_zeroes": true, 00:33:10.305 "zcopy": true, 00:33:10.305 "get_zone_info": false, 00:33:10.305 "zone_management": false, 00:33:10.306 "zone_append": false, 00:33:10.306 "compare": false, 00:33:10.306 "compare_and_write": false, 00:33:10.306 "abort": true, 00:33:10.306 "seek_hole": false, 00:33:10.306 "seek_data": false, 00:33:10.306 "copy": true, 00:33:10.306 "nvme_iov_md": false 00:33:10.306 }, 00:33:10.306 "memory_domains": [ 00:33:10.306 { 00:33:10.306 "dma_device_id": "system", 00:33:10.306 "dma_device_type": 1 00:33:10.306 }, 00:33:10.306 { 00:33:10.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.306 "dma_device_type": 2 00:33:10.306 } 00:33:10.306 ], 00:33:10.306 "driver_specific": {} 00:33:10.306 } 00:33:10.306 ] 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.306 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.564 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.564 "name": "Existed_Raid", 00:33:10.564 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:10.564 "strip_size_kb": 0, 00:33:10.564 "state": "configuring", 00:33:10.564 "raid_level": "raid1", 00:33:10.564 "superblock": true, 00:33:10.564 "num_base_bdevs": 4, 00:33:10.564 "num_base_bdevs_discovered": 2, 00:33:10.564 "num_base_bdevs_operational": 4, 
00:33:10.564 "base_bdevs_list": [ 00:33:10.564 { 00:33:10.564 "name": "BaseBdev1", 00:33:10.564 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:10.564 "is_configured": true, 00:33:10.564 "data_offset": 2048, 00:33:10.564 "data_size": 63488 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": "BaseBdev2", 00:33:10.564 "uuid": "31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:10.564 "is_configured": true, 00:33:10.564 "data_offset": 2048, 00:33:10.564 "data_size": 63488 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": "BaseBdev3", 00:33:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.564 "is_configured": false, 00:33:10.564 "data_offset": 0, 00:33:10.564 "data_size": 0 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": "BaseBdev4", 00:33:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.564 "is_configured": false, 00:33:10.564 "data_offset": 0, 00:33:10.564 "data_size": 0 00:33:10.564 } 00:33:10.564 ] 00:33:10.564 }' 00:33:10.564 14:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.564 14:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 [2024-10-09 14:02:17.277188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:10.822 BaseBdev3 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 [ 00:33:10.822 { 00:33:10.822 "name": "BaseBdev3", 00:33:10.822 "aliases": [ 00:33:10.822 "e80b9691-4852-41a9-9443-38a99b607fbc" 00:33:10.822 ], 00:33:10.822 "product_name": "Malloc disk", 00:33:10.822 "block_size": 512, 00:33:10.822 "num_blocks": 65536, 00:33:10.822 "uuid": "e80b9691-4852-41a9-9443-38a99b607fbc", 00:33:10.822 "assigned_rate_limits": { 00:33:10.822 "rw_ios_per_sec": 0, 00:33:10.822 "rw_mbytes_per_sec": 0, 00:33:10.822 "r_mbytes_per_sec": 0, 00:33:10.822 "w_mbytes_per_sec": 0 00:33:10.822 }, 00:33:10.822 "claimed": true, 00:33:10.822 "claim_type": "exclusive_write", 00:33:10.822 "zoned": false, 00:33:10.822 "supported_io_types": { 00:33:10.822 "read": true, 00:33:10.822 
"write": true, 00:33:10.822 "unmap": true, 00:33:10.822 "flush": true, 00:33:10.822 "reset": true, 00:33:10.822 "nvme_admin": false, 00:33:10.822 "nvme_io": false, 00:33:10.822 "nvme_io_md": false, 00:33:10.822 "write_zeroes": true, 00:33:10.822 "zcopy": true, 00:33:10.822 "get_zone_info": false, 00:33:10.822 "zone_management": false, 00:33:10.822 "zone_append": false, 00:33:10.822 "compare": false, 00:33:10.822 "compare_and_write": false, 00:33:10.822 "abort": true, 00:33:10.822 "seek_hole": false, 00:33:10.822 "seek_data": false, 00:33:10.822 "copy": true, 00:33:10.822 "nvme_iov_md": false 00:33:10.822 }, 00:33:10.822 "memory_domains": [ 00:33:10.822 { 00:33:10.822 "dma_device_id": "system", 00:33:10.822 "dma_device_type": 1 00:33:10.822 }, 00:33:10.822 { 00:33:10.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:10.822 "dma_device_type": 2 00:33:10.822 } 00:33:10.822 ], 00:33:10.822 "driver_specific": {} 00:33:10.822 } 00:33:10.822 ] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.822 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.822 "name": "Existed_Raid", 00:33:10.822 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:10.822 "strip_size_kb": 0, 00:33:10.822 "state": "configuring", 00:33:10.822 "raid_level": "raid1", 00:33:10.822 "superblock": true, 00:33:10.822 "num_base_bdevs": 4, 00:33:10.822 "num_base_bdevs_discovered": 3, 00:33:10.822 "num_base_bdevs_operational": 4, 00:33:10.822 "base_bdevs_list": [ 00:33:10.822 { 00:33:10.823 "name": "BaseBdev1", 00:33:10.823 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:10.823 "is_configured": true, 00:33:10.823 "data_offset": 2048, 00:33:10.823 "data_size": 63488 00:33:10.823 }, 00:33:10.823 { 00:33:10.823 "name": "BaseBdev2", 00:33:10.823 "uuid": 
"31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:10.823 "is_configured": true, 00:33:10.823 "data_offset": 2048, 00:33:10.823 "data_size": 63488 00:33:10.823 }, 00:33:10.823 { 00:33:10.823 "name": "BaseBdev3", 00:33:10.823 "uuid": "e80b9691-4852-41a9-9443-38a99b607fbc", 00:33:10.823 "is_configured": true, 00:33:10.823 "data_offset": 2048, 00:33:10.823 "data_size": 63488 00:33:10.823 }, 00:33:10.823 { 00:33:10.823 "name": "BaseBdev4", 00:33:10.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.823 "is_configured": false, 00:33:10.823 "data_offset": 0, 00:33:10.823 "data_size": 0 00:33:10.823 } 00:33:10.823 ] 00:33:10.823 }' 00:33:10.823 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.823 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 [2024-10-09 14:02:17.728408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:11.388 [2024-10-09 14:02:17.728639] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:33:11.388 [2024-10-09 14:02:17.728656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:11.388 BaseBdev4 00:33:11.388 [2024-10-09 14:02:17.728957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:11.388 [2024-10-09 14:02:17.729091] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:33:11.388 [2024-10-09 14:02:17.729105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:33:11.388 [2024-10-09 14:02:17.729214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.388 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.388 [ 00:33:11.388 { 00:33:11.388 "name": "BaseBdev4", 00:33:11.388 "aliases": [ 00:33:11.388 "2af00ca0-d9c9-408c-9ca3-8cd5b35e98da" 00:33:11.388 ], 00:33:11.388 "product_name": "Malloc disk", 00:33:11.388 "block_size": 512, 00:33:11.388 
"num_blocks": 65536, 00:33:11.388 "uuid": "2af00ca0-d9c9-408c-9ca3-8cd5b35e98da", 00:33:11.388 "assigned_rate_limits": { 00:33:11.388 "rw_ios_per_sec": 0, 00:33:11.388 "rw_mbytes_per_sec": 0, 00:33:11.388 "r_mbytes_per_sec": 0, 00:33:11.388 "w_mbytes_per_sec": 0 00:33:11.388 }, 00:33:11.388 "claimed": true, 00:33:11.388 "claim_type": "exclusive_write", 00:33:11.388 "zoned": false, 00:33:11.388 "supported_io_types": { 00:33:11.388 "read": true, 00:33:11.388 "write": true, 00:33:11.388 "unmap": true, 00:33:11.388 "flush": true, 00:33:11.388 "reset": true, 00:33:11.388 "nvme_admin": false, 00:33:11.388 "nvme_io": false, 00:33:11.388 "nvme_io_md": false, 00:33:11.388 "write_zeroes": true, 00:33:11.388 "zcopy": true, 00:33:11.388 "get_zone_info": false, 00:33:11.388 "zone_management": false, 00:33:11.388 "zone_append": false, 00:33:11.388 "compare": false, 00:33:11.388 "compare_and_write": false, 00:33:11.388 "abort": true, 00:33:11.388 "seek_hole": false, 00:33:11.388 "seek_data": false, 00:33:11.388 "copy": true, 00:33:11.388 "nvme_iov_md": false 00:33:11.388 }, 00:33:11.388 "memory_domains": [ 00:33:11.388 { 00:33:11.388 "dma_device_id": "system", 00:33:11.388 "dma_device_type": 1 00:33:11.388 }, 00:33:11.388 { 00:33:11.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.389 "dma_device_type": 2 00:33:11.389 } 00:33:11.389 ], 00:33:11.389 "driver_specific": {} 00:33:11.389 } 00:33:11.389 ] 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:11.389 "name": "Existed_Raid", 00:33:11.389 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:11.389 "strip_size_kb": 0, 00:33:11.389 "state": "online", 00:33:11.389 "raid_level": "raid1", 00:33:11.389 "superblock": true, 00:33:11.389 "num_base_bdevs": 4, 
00:33:11.389 "num_base_bdevs_discovered": 4, 00:33:11.389 "num_base_bdevs_operational": 4, 00:33:11.389 "base_bdevs_list": [ 00:33:11.389 { 00:33:11.389 "name": "BaseBdev1", 00:33:11.389 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:11.389 "is_configured": true, 00:33:11.389 "data_offset": 2048, 00:33:11.389 "data_size": 63488 00:33:11.389 }, 00:33:11.389 { 00:33:11.389 "name": "BaseBdev2", 00:33:11.389 "uuid": "31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:11.389 "is_configured": true, 00:33:11.389 "data_offset": 2048, 00:33:11.389 "data_size": 63488 00:33:11.389 }, 00:33:11.389 { 00:33:11.389 "name": "BaseBdev3", 00:33:11.389 "uuid": "e80b9691-4852-41a9-9443-38a99b607fbc", 00:33:11.389 "is_configured": true, 00:33:11.389 "data_offset": 2048, 00:33:11.389 "data_size": 63488 00:33:11.389 }, 00:33:11.389 { 00:33:11.389 "name": "BaseBdev4", 00:33:11.389 "uuid": "2af00ca0-d9c9-408c-9ca3-8cd5b35e98da", 00:33:11.389 "is_configured": true, 00:33:11.389 "data_offset": 2048, 00:33:11.389 "data_size": 63488 00:33:11.389 } 00:33:11.389 ] 00:33:11.389 }' 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:11.389 14:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:11.955 
14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:11.955 [2024-10-09 14:02:18.204930] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.955 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:11.955 "name": "Existed_Raid", 00:33:11.955 "aliases": [ 00:33:11.955 "16c6a343-599f-487f-a59b-e93aa620c706" 00:33:11.955 ], 00:33:11.955 "product_name": "Raid Volume", 00:33:11.955 "block_size": 512, 00:33:11.955 "num_blocks": 63488, 00:33:11.955 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:11.955 "assigned_rate_limits": { 00:33:11.955 "rw_ios_per_sec": 0, 00:33:11.955 "rw_mbytes_per_sec": 0, 00:33:11.955 "r_mbytes_per_sec": 0, 00:33:11.955 "w_mbytes_per_sec": 0 00:33:11.955 }, 00:33:11.955 "claimed": false, 00:33:11.955 "zoned": false, 00:33:11.955 "supported_io_types": { 00:33:11.955 "read": true, 00:33:11.955 "write": true, 00:33:11.955 "unmap": false, 00:33:11.955 "flush": false, 00:33:11.955 "reset": true, 00:33:11.955 "nvme_admin": false, 00:33:11.955 "nvme_io": false, 00:33:11.955 "nvme_io_md": false, 00:33:11.955 "write_zeroes": true, 00:33:11.955 "zcopy": false, 00:33:11.955 "get_zone_info": false, 00:33:11.955 "zone_management": false, 00:33:11.955 "zone_append": false, 00:33:11.955 "compare": false, 00:33:11.955 "compare_and_write": false, 00:33:11.955 "abort": false, 00:33:11.955 "seek_hole": false, 00:33:11.955 "seek_data": false, 00:33:11.955 "copy": false, 00:33:11.955 
"nvme_iov_md": false 00:33:11.955 }, 00:33:11.955 "memory_domains": [ 00:33:11.955 { 00:33:11.955 "dma_device_id": "system", 00:33:11.955 "dma_device_type": 1 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.955 "dma_device_type": 2 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "system", 00:33:11.955 "dma_device_type": 1 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.955 "dma_device_type": 2 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "system", 00:33:11.955 "dma_device_type": 1 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.955 "dma_device_type": 2 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "system", 00:33:11.955 "dma_device_type": 1 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:11.955 "dma_device_type": 2 00:33:11.955 } 00:33:11.955 ], 00:33:11.955 "driver_specific": { 00:33:11.955 "raid": { 00:33:11.955 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:11.955 "strip_size_kb": 0, 00:33:11.955 "state": "online", 00:33:11.955 "raid_level": "raid1", 00:33:11.955 "superblock": true, 00:33:11.955 "num_base_bdevs": 4, 00:33:11.955 "num_base_bdevs_discovered": 4, 00:33:11.955 "num_base_bdevs_operational": 4, 00:33:11.955 "base_bdevs_list": [ 00:33:11.955 { 00:33:11.955 "name": "BaseBdev1", 00:33:11.955 "uuid": "7287664b-17d5-4c30-9598-d2364985a7d3", 00:33:11.955 "is_configured": true, 00:33:11.955 "data_offset": 2048, 00:33:11.955 "data_size": 63488 00:33:11.955 }, 00:33:11.955 { 00:33:11.955 "name": "BaseBdev2", 00:33:11.955 "uuid": "31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:11.955 "is_configured": true, 00:33:11.955 "data_offset": 2048, 00:33:11.956 "data_size": 63488 00:33:11.956 }, 00:33:11.956 { 00:33:11.956 "name": "BaseBdev3", 00:33:11.956 "uuid": "e80b9691-4852-41a9-9443-38a99b607fbc", 00:33:11.956 "is_configured": true, 
00:33:11.956 "data_offset": 2048, 00:33:11.956 "data_size": 63488 00:33:11.956 }, 00:33:11.956 { 00:33:11.956 "name": "BaseBdev4", 00:33:11.956 "uuid": "2af00ca0-d9c9-408c-9ca3-8cd5b35e98da", 00:33:11.956 "is_configured": true, 00:33:11.956 "data_offset": 2048, 00:33:11.956 "data_size": 63488 00:33:11.956 } 00:33:11.956 ] 00:33:11.956 } 00:33:11.956 } 00:33:11.956 }' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:11.956 BaseBdev2 00:33:11.956 BaseBdev3 00:33:11.956 BaseBdev4' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:11.956 14:02:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.956 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.214 [2024-10-09 14:02:18.532696] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:12.214 14:02:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.214 "name": "Existed_Raid", 00:33:12.214 "uuid": "16c6a343-599f-487f-a59b-e93aa620c706", 00:33:12.214 "strip_size_kb": 0, 00:33:12.214 
"state": "online", 00:33:12.214 "raid_level": "raid1", 00:33:12.214 "superblock": true, 00:33:12.214 "num_base_bdevs": 4, 00:33:12.214 "num_base_bdevs_discovered": 3, 00:33:12.214 "num_base_bdevs_operational": 3, 00:33:12.214 "base_bdevs_list": [ 00:33:12.214 { 00:33:12.214 "name": null, 00:33:12.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.214 "is_configured": false, 00:33:12.214 "data_offset": 0, 00:33:12.214 "data_size": 63488 00:33:12.214 }, 00:33:12.214 { 00:33:12.214 "name": "BaseBdev2", 00:33:12.214 "uuid": "31c25647-9c3c-49a5-a72b-22d78e847984", 00:33:12.214 "is_configured": true, 00:33:12.214 "data_offset": 2048, 00:33:12.214 "data_size": 63488 00:33:12.214 }, 00:33:12.214 { 00:33:12.214 "name": "BaseBdev3", 00:33:12.214 "uuid": "e80b9691-4852-41a9-9443-38a99b607fbc", 00:33:12.214 "is_configured": true, 00:33:12.214 "data_offset": 2048, 00:33:12.214 "data_size": 63488 00:33:12.214 }, 00:33:12.214 { 00:33:12.214 "name": "BaseBdev4", 00:33:12.214 "uuid": "2af00ca0-d9c9-408c-9ca3-8cd5b35e98da", 00:33:12.214 "is_configured": true, 00:33:12.214 "data_offset": 2048, 00:33:12.214 "data_size": 63488 00:33:12.214 } 00:33:12.214 ] 00:33:12.214 }' 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.214 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.472 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:12.472 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:12.472 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:12.472 14:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.472 14:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.472 14:02:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.472 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 [2024-10-09 14:02:19.045020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 [2024-10-09 14:02:19.109175] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 [2024-10-09 14:02:19.173354] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:12.730 [2024-10-09 14:02:19.173456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:12.730 [2024-10-09 14:02:19.185878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.730 [2024-10-09 14:02:19.185938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.730 [2024-10-09 14:02:19.185954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.730 14:02:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:33:12.730 [ 00:33:12.730 { 00:33:12.730 "name": "BaseBdev2", 00:33:12.730 "aliases": [ 00:33:12.730 "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf" 00:33:12.730 ], 00:33:12.730 "product_name": "Malloc disk", 00:33:12.730 "block_size": 512, 00:33:12.730 "num_blocks": 65536, 00:33:12.730 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:12.730 "assigned_rate_limits": { 00:33:12.730 "rw_ios_per_sec": 0, 00:33:12.731 "rw_mbytes_per_sec": 0, 00:33:12.731 "r_mbytes_per_sec": 0, 00:33:12.731 "w_mbytes_per_sec": 0 00:33:12.989 }, 00:33:12.989 "claimed": false, 00:33:12.989 "zoned": false, 00:33:12.989 "supported_io_types": { 00:33:12.989 "read": true, 00:33:12.989 "write": true, 00:33:12.989 "unmap": true, 00:33:12.989 "flush": true, 00:33:12.989 "reset": true, 00:33:12.989 "nvme_admin": false, 00:33:12.989 "nvme_io": false, 00:33:12.989 "nvme_io_md": false, 00:33:12.989 "write_zeroes": true, 00:33:12.989 "zcopy": true, 00:33:12.989 "get_zone_info": false, 00:33:12.989 "zone_management": false, 00:33:12.989 "zone_append": false, 00:33:12.989 "compare": false, 00:33:12.989 "compare_and_write": false, 00:33:12.989 "abort": true, 00:33:12.989 "seek_hole": false, 00:33:12.989 "seek_data": false, 00:33:12.989 "copy": true, 00:33:12.989 "nvme_iov_md": false 00:33:12.989 }, 00:33:12.989 "memory_domains": [ 00:33:12.989 { 00:33:12.989 "dma_device_id": "system", 00:33:12.989 "dma_device_type": 1 00:33:12.989 }, 00:33:12.989 { 00:33:12.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.989 "dma_device_type": 2 00:33:12.989 } 00:33:12.989 ], 00:33:12.989 "driver_specific": {} 00:33:12.989 } 00:33:12.989 ] 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:12.989 14:02:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.989 BaseBdev3 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.989 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 [ 00:33:12.990 { 00:33:12.990 "name": "BaseBdev3", 00:33:12.990 "aliases": [ 00:33:12.990 "120b3620-14c5-4183-ba82-7890dda42980" 00:33:12.990 ], 00:33:12.990 "product_name": "Malloc disk", 00:33:12.990 "block_size": 512, 00:33:12.990 "num_blocks": 65536, 00:33:12.990 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:12.990 "assigned_rate_limits": { 00:33:12.990 "rw_ios_per_sec": 0, 00:33:12.990 "rw_mbytes_per_sec": 0, 00:33:12.990 "r_mbytes_per_sec": 0, 00:33:12.990 "w_mbytes_per_sec": 0 00:33:12.990 }, 00:33:12.990 "claimed": false, 00:33:12.990 "zoned": false, 00:33:12.990 "supported_io_types": { 00:33:12.990 "read": true, 00:33:12.990 "write": true, 00:33:12.990 "unmap": true, 00:33:12.990 "flush": true, 00:33:12.990 "reset": true, 00:33:12.990 "nvme_admin": false, 00:33:12.990 "nvme_io": false, 00:33:12.990 "nvme_io_md": false, 00:33:12.990 "write_zeroes": true, 00:33:12.990 "zcopy": true, 00:33:12.990 "get_zone_info": false, 00:33:12.990 "zone_management": false, 00:33:12.990 "zone_append": false, 00:33:12.990 "compare": false, 00:33:12.990 "compare_and_write": false, 00:33:12.990 "abort": true, 00:33:12.990 "seek_hole": false, 00:33:12.990 "seek_data": false, 00:33:12.990 "copy": true, 00:33:12.990 "nvme_iov_md": false 00:33:12.990 }, 00:33:12.990 "memory_domains": [ 00:33:12.990 { 00:33:12.990 "dma_device_id": "system", 00:33:12.990 "dma_device_type": 1 00:33:12.990 }, 00:33:12.990 { 00:33:12.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.990 "dma_device_type": 2 00:33:12.990 } 00:33:12.990 ], 00:33:12.990 "driver_specific": {} 00:33:12.990 } 00:33:12.990 ] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 BaseBdev4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 [ 00:33:12.990 { 00:33:12.990 "name": "BaseBdev4", 00:33:12.990 "aliases": [ 00:33:12.990 "c5d1c232-7caa-4bbe-aa7b-576e945f8976" 00:33:12.990 ], 00:33:12.990 "product_name": "Malloc disk", 00:33:12.990 "block_size": 512, 00:33:12.990 "num_blocks": 65536, 00:33:12.990 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:12.990 "assigned_rate_limits": { 00:33:12.990 "rw_ios_per_sec": 0, 00:33:12.990 "rw_mbytes_per_sec": 0, 00:33:12.990 "r_mbytes_per_sec": 0, 00:33:12.990 "w_mbytes_per_sec": 0 00:33:12.990 }, 00:33:12.990 "claimed": false, 00:33:12.990 "zoned": false, 00:33:12.990 "supported_io_types": { 00:33:12.990 "read": true, 00:33:12.990 "write": true, 00:33:12.990 "unmap": true, 00:33:12.990 "flush": true, 00:33:12.990 "reset": true, 00:33:12.990 "nvme_admin": false, 00:33:12.990 "nvme_io": false, 00:33:12.990 "nvme_io_md": false, 00:33:12.990 "write_zeroes": true, 00:33:12.990 "zcopy": true, 00:33:12.990 "get_zone_info": false, 00:33:12.990 "zone_management": false, 00:33:12.990 "zone_append": false, 00:33:12.990 "compare": false, 00:33:12.990 "compare_and_write": false, 00:33:12.990 "abort": true, 00:33:12.990 "seek_hole": false, 00:33:12.990 "seek_data": false, 00:33:12.990 "copy": true, 00:33:12.990 "nvme_iov_md": false 00:33:12.990 }, 00:33:12.990 "memory_domains": [ 00:33:12.990 { 00:33:12.990 "dma_device_id": "system", 00:33:12.990 "dma_device_type": 1 00:33:12.990 }, 00:33:12.990 { 00:33:12.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.990 "dma_device_type": 2 00:33:12.990 } 00:33:12.990 ], 00:33:12.990 "driver_specific": {} 00:33:12.990 } 00:33:12.990 ] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 [2024-10-09 14:02:19.396423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:12.990 [2024-10-09 14:02:19.396603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:12.990 [2024-10-09 14:02:19.396633] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:12.990 [2024-10-09 14:02:19.398831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:12.990 [2024-10-09 14:02:19.398876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.990 "name": "Existed_Raid", 00:33:12.990 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:12.990 "strip_size_kb": 0, 00:33:12.990 "state": "configuring", 00:33:12.990 "raid_level": "raid1", 00:33:12.990 "superblock": true, 00:33:12.990 "num_base_bdevs": 4, 00:33:12.990 "num_base_bdevs_discovered": 3, 00:33:12.990 "num_base_bdevs_operational": 4, 00:33:12.990 "base_bdevs_list": [ 00:33:12.990 { 00:33:12.990 "name": "BaseBdev1", 00:33:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.990 "is_configured": false, 00:33:12.990 "data_offset": 0, 00:33:12.990 "data_size": 0 00:33:12.990 }, 00:33:12.990 { 00:33:12.990 "name": "BaseBdev2", 00:33:12.990 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 
00:33:12.990 "is_configured": true, 00:33:12.990 "data_offset": 2048, 00:33:12.990 "data_size": 63488 00:33:12.990 }, 00:33:12.990 { 00:33:12.990 "name": "BaseBdev3", 00:33:12.990 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:12.990 "is_configured": true, 00:33:12.990 "data_offset": 2048, 00:33:12.990 "data_size": 63488 00:33:12.990 }, 00:33:12.990 { 00:33:12.990 "name": "BaseBdev4", 00:33:12.990 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:12.990 "is_configured": true, 00:33:12.990 "data_offset": 2048, 00:33:12.990 "data_size": 63488 00:33:12.990 } 00:33:12.990 ] 00:33:12.990 }' 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.990 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 [2024-10-09 14:02:19.856521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:13.556 "name": "Existed_Raid", 00:33:13.556 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:13.556 "strip_size_kb": 0, 00:33:13.556 "state": "configuring", 00:33:13.556 "raid_level": "raid1", 00:33:13.556 "superblock": true, 00:33:13.556 "num_base_bdevs": 4, 00:33:13.556 "num_base_bdevs_discovered": 2, 00:33:13.556 "num_base_bdevs_operational": 4, 00:33:13.556 "base_bdevs_list": [ 00:33:13.556 { 00:33:13.556 "name": "BaseBdev1", 00:33:13.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.556 "is_configured": false, 00:33:13.556 "data_offset": 0, 00:33:13.556 "data_size": 0 00:33:13.556 }, 00:33:13.556 { 00:33:13.556 "name": null, 00:33:13.556 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:13.556 
"is_configured": false, 00:33:13.556 "data_offset": 0, 00:33:13.556 "data_size": 63488 00:33:13.556 }, 00:33:13.556 { 00:33:13.556 "name": "BaseBdev3", 00:33:13.556 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:13.556 "is_configured": true, 00:33:13.556 "data_offset": 2048, 00:33:13.556 "data_size": 63488 00:33:13.556 }, 00:33:13.556 { 00:33:13.556 "name": "BaseBdev4", 00:33:13.556 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:13.556 "is_configured": true, 00:33:13.556 "data_offset": 2048, 00:33:13.556 "data_size": 63488 00:33:13.556 } 00:33:13.556 ] 00:33:13.556 }' 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:13.556 14:02:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.814 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:13.814 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:13.815 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.815 [2024-10-09 14:02:20.363935] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:14.072 BaseBdev1 
00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.072 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 [ 00:33:14.072 { 00:33:14.072 "name": "BaseBdev1", 00:33:14.072 "aliases": [ 00:33:14.072 "1db4b7c5-eb8d-4e42-994f-2808eb165e94" 00:33:14.072 ], 00:33:14.072 "product_name": "Malloc disk", 00:33:14.072 "block_size": 512, 00:33:14.072 "num_blocks": 65536, 00:33:14.073 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:14.073 "assigned_rate_limits": { 00:33:14.073 
"rw_ios_per_sec": 0, 00:33:14.073 "rw_mbytes_per_sec": 0, 00:33:14.073 "r_mbytes_per_sec": 0, 00:33:14.073 "w_mbytes_per_sec": 0 00:33:14.073 }, 00:33:14.073 "claimed": true, 00:33:14.073 "claim_type": "exclusive_write", 00:33:14.073 "zoned": false, 00:33:14.073 "supported_io_types": { 00:33:14.073 "read": true, 00:33:14.073 "write": true, 00:33:14.073 "unmap": true, 00:33:14.073 "flush": true, 00:33:14.073 "reset": true, 00:33:14.073 "nvme_admin": false, 00:33:14.073 "nvme_io": false, 00:33:14.073 "nvme_io_md": false, 00:33:14.073 "write_zeroes": true, 00:33:14.073 "zcopy": true, 00:33:14.073 "get_zone_info": false, 00:33:14.073 "zone_management": false, 00:33:14.073 "zone_append": false, 00:33:14.073 "compare": false, 00:33:14.073 "compare_and_write": false, 00:33:14.073 "abort": true, 00:33:14.073 "seek_hole": false, 00:33:14.073 "seek_data": false, 00:33:14.073 "copy": true, 00:33:14.073 "nvme_iov_md": false 00:33:14.073 }, 00:33:14.073 "memory_domains": [ 00:33:14.073 { 00:33:14.073 "dma_device_id": "system", 00:33:14.073 "dma_device_type": 1 00:33:14.073 }, 00:33:14.073 { 00:33:14.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.073 "dma_device_type": 2 00:33:14.073 } 00:33:14.073 ], 00:33:14.073 "driver_specific": {} 00:33:14.073 } 00:33:14.073 ] 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.073 "name": "Existed_Raid", 00:33:14.073 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:14.073 "strip_size_kb": 0, 00:33:14.073 "state": "configuring", 00:33:14.073 "raid_level": "raid1", 00:33:14.073 "superblock": true, 00:33:14.073 "num_base_bdevs": 4, 00:33:14.073 "num_base_bdevs_discovered": 3, 00:33:14.073 "num_base_bdevs_operational": 4, 00:33:14.073 "base_bdevs_list": [ 00:33:14.073 { 00:33:14.073 "name": "BaseBdev1", 00:33:14.073 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:14.073 "is_configured": true, 00:33:14.073 "data_offset": 2048, 00:33:14.073 "data_size": 63488 
00:33:14.073 }, 00:33:14.073 { 00:33:14.073 "name": null, 00:33:14.073 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:14.073 "is_configured": false, 00:33:14.073 "data_offset": 0, 00:33:14.073 "data_size": 63488 00:33:14.073 }, 00:33:14.073 { 00:33:14.073 "name": "BaseBdev3", 00:33:14.073 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:14.073 "is_configured": true, 00:33:14.073 "data_offset": 2048, 00:33:14.073 "data_size": 63488 00:33:14.073 }, 00:33:14.073 { 00:33:14.073 "name": "BaseBdev4", 00:33:14.073 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:14.073 "is_configured": true, 00:33:14.073 "data_offset": 2048, 00:33:14.073 "data_size": 63488 00:33:14.073 } 00:33:14.073 ] 00:33:14.073 }' 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.073 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.330 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.330 
[2024-10-09 14:02:20.876116] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.588 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.588 14:02:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:14.588 "name": "Existed_Raid", 00:33:14.588 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:14.588 "strip_size_kb": 0, 00:33:14.588 "state": "configuring", 00:33:14.588 "raid_level": "raid1", 00:33:14.588 "superblock": true, 00:33:14.588 "num_base_bdevs": 4, 00:33:14.588 "num_base_bdevs_discovered": 2, 00:33:14.588 "num_base_bdevs_operational": 4, 00:33:14.588 "base_bdevs_list": [ 00:33:14.588 { 00:33:14.588 "name": "BaseBdev1", 00:33:14.588 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:14.588 "is_configured": true, 00:33:14.588 "data_offset": 2048, 00:33:14.588 "data_size": 63488 00:33:14.588 }, 00:33:14.588 { 00:33:14.588 "name": null, 00:33:14.589 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:14.589 "is_configured": false, 00:33:14.589 "data_offset": 0, 00:33:14.589 "data_size": 63488 00:33:14.589 }, 00:33:14.589 { 00:33:14.589 "name": null, 00:33:14.589 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:14.589 "is_configured": false, 00:33:14.589 "data_offset": 0, 00:33:14.589 "data_size": 63488 00:33:14.589 }, 00:33:14.589 { 00:33:14.589 "name": "BaseBdev4", 00:33:14.589 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:14.589 "is_configured": true, 00:33:14.589 "data_offset": 2048, 00:33:14.589 "data_size": 63488 00:33:14.589 } 00:33:14.589 ] 00:33:14.589 }' 00:33:14.589 14:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:14.589 14:02:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.875 14:02:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.875 [2024-10-09 14:02:21.368270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.875 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.134 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.134 "name": "Existed_Raid", 00:33:15.134 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:15.134 "strip_size_kb": 0, 00:33:15.134 "state": "configuring", 00:33:15.134 "raid_level": "raid1", 00:33:15.134 "superblock": true, 00:33:15.134 "num_base_bdevs": 4, 00:33:15.134 "num_base_bdevs_discovered": 3, 00:33:15.134 "num_base_bdevs_operational": 4, 00:33:15.134 "base_bdevs_list": [ 00:33:15.134 { 00:33:15.134 "name": "BaseBdev1", 00:33:15.134 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:15.134 "is_configured": true, 00:33:15.134 "data_offset": 2048, 00:33:15.134 "data_size": 63488 00:33:15.134 }, 00:33:15.134 { 00:33:15.134 "name": null, 00:33:15.134 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:15.134 "is_configured": false, 00:33:15.134 "data_offset": 0, 00:33:15.134 "data_size": 63488 00:33:15.134 }, 00:33:15.134 { 00:33:15.134 "name": "BaseBdev3", 00:33:15.134 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:15.134 "is_configured": true, 00:33:15.134 "data_offset": 2048, 00:33:15.134 "data_size": 63488 00:33:15.134 }, 00:33:15.134 { 00:33:15.134 "name": "BaseBdev4", 00:33:15.134 "uuid": 
"c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:15.134 "is_configured": true, 00:33:15.134 "data_offset": 2048, 00:33:15.134 "data_size": 63488 00:33:15.134 } 00:33:15.134 ] 00:33:15.134 }' 00:33:15.134 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.134 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.392 [2024-10-09 14:02:21.856419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.392 "name": "Existed_Raid", 00:33:15.392 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:15.392 "strip_size_kb": 0, 00:33:15.392 "state": "configuring", 00:33:15.392 "raid_level": "raid1", 00:33:15.392 "superblock": true, 00:33:15.392 "num_base_bdevs": 4, 00:33:15.392 "num_base_bdevs_discovered": 2, 00:33:15.392 "num_base_bdevs_operational": 4, 00:33:15.392 "base_bdevs_list": [ 00:33:15.392 { 00:33:15.392 "name": null, 00:33:15.392 
"uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:15.392 "is_configured": false, 00:33:15.392 "data_offset": 0, 00:33:15.392 "data_size": 63488 00:33:15.392 }, 00:33:15.392 { 00:33:15.392 "name": null, 00:33:15.392 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:15.392 "is_configured": false, 00:33:15.392 "data_offset": 0, 00:33:15.392 "data_size": 63488 00:33:15.392 }, 00:33:15.392 { 00:33:15.392 "name": "BaseBdev3", 00:33:15.392 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:15.392 "is_configured": true, 00:33:15.392 "data_offset": 2048, 00:33:15.392 "data_size": 63488 00:33:15.392 }, 00:33:15.392 { 00:33:15.392 "name": "BaseBdev4", 00:33:15.392 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:15.392 "is_configured": true, 00:33:15.392 "data_offset": 2048, 00:33:15.392 "data_size": 63488 00:33:15.392 } 00:33:15.392 ] 00:33:15.392 }' 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.392 14:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.958 [2024-10-09 14:02:22.371317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:15.958 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:15.959 "name": "Existed_Raid", 00:33:15.959 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:15.959 "strip_size_kb": 0, 00:33:15.959 "state": "configuring", 00:33:15.959 "raid_level": "raid1", 00:33:15.959 "superblock": true, 00:33:15.959 "num_base_bdevs": 4, 00:33:15.959 "num_base_bdevs_discovered": 3, 00:33:15.959 "num_base_bdevs_operational": 4, 00:33:15.959 "base_bdevs_list": [ 00:33:15.959 { 00:33:15.959 "name": null, 00:33:15.959 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:15.959 "is_configured": false, 00:33:15.959 "data_offset": 0, 00:33:15.959 "data_size": 63488 00:33:15.959 }, 00:33:15.959 { 00:33:15.959 "name": "BaseBdev2", 00:33:15.959 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:15.959 "is_configured": true, 00:33:15.959 "data_offset": 2048, 00:33:15.959 "data_size": 63488 00:33:15.959 }, 00:33:15.959 { 00:33:15.959 "name": "BaseBdev3", 00:33:15.959 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:15.959 "is_configured": true, 00:33:15.959 "data_offset": 2048, 00:33:15.959 "data_size": 63488 00:33:15.959 }, 00:33:15.959 { 00:33:15.959 "name": "BaseBdev4", 00:33:15.959 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:15.959 "is_configured": true, 00:33:15.959 "data_offset": 2048, 00:33:15.959 "data_size": 63488 00:33:15.959 } 00:33:15.959 ] 00:33:15.959 }' 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:15.959 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1db4b7c5-eb8d-4e42-994f-2808eb165e94 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.526 [2024-10-09 14:02:22.950784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:16.526 [2024-10-09 14:02:22.951008] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:33:16.526 [2024-10-09 14:02:22.951034] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:16.526 NewBaseBdev 00:33:16.526 [2024-10-09 14:02:22.951310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:33:16.526 [2024-10-09 14:02:22.951451] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:33:16.526 [2024-10-09 14:02:22.951469] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:33:16.526 [2024-10-09 14:02:22.951598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.526 14:02:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:16.526 [ 00:33:16.526 { 00:33:16.526 "name": "NewBaseBdev", 00:33:16.526 "aliases": [ 00:33:16.526 "1db4b7c5-eb8d-4e42-994f-2808eb165e94" 00:33:16.526 ], 00:33:16.526 "product_name": "Malloc disk", 00:33:16.526 "block_size": 512, 00:33:16.526 "num_blocks": 65536, 00:33:16.526 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:16.526 "assigned_rate_limits": { 00:33:16.526 "rw_ios_per_sec": 0, 00:33:16.526 "rw_mbytes_per_sec": 0, 00:33:16.526 "r_mbytes_per_sec": 0, 00:33:16.526 "w_mbytes_per_sec": 0 00:33:16.526 }, 00:33:16.526 "claimed": true, 00:33:16.526 "claim_type": "exclusive_write", 00:33:16.526 "zoned": false, 00:33:16.526 "supported_io_types": { 00:33:16.526 "read": true, 00:33:16.526 "write": true, 00:33:16.526 "unmap": true, 00:33:16.527 "flush": true, 00:33:16.527 "reset": true, 00:33:16.527 "nvme_admin": false, 00:33:16.527 "nvme_io": false, 00:33:16.527 "nvme_io_md": false, 00:33:16.527 "write_zeroes": true, 00:33:16.527 "zcopy": true, 00:33:16.527 "get_zone_info": false, 00:33:16.527 "zone_management": false, 00:33:16.527 "zone_append": false, 00:33:16.527 "compare": false, 00:33:16.527 "compare_and_write": false, 00:33:16.527 "abort": true, 00:33:16.527 "seek_hole": false, 00:33:16.527 "seek_data": false, 00:33:16.527 "copy": true, 00:33:16.527 "nvme_iov_md": false 00:33:16.527 }, 00:33:16.527 "memory_domains": [ 00:33:16.527 { 00:33:16.527 "dma_device_id": "system", 00:33:16.527 "dma_device_type": 1 00:33:16.527 }, 00:33:16.527 { 00:33:16.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.527 "dma_device_type": 2 00:33:16.527 } 00:33:16.527 ], 00:33:16.527 "driver_specific": {} 00:33:16.527 } 00:33:16.527 ] 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.527 14:02:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.527 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.527 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.527 "name": "Existed_Raid", 00:33:16.527 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:16.527 "strip_size_kb": 0, 00:33:16.527 "state": "online", 00:33:16.527 "raid_level": 
"raid1", 00:33:16.527 "superblock": true, 00:33:16.527 "num_base_bdevs": 4, 00:33:16.527 "num_base_bdevs_discovered": 4, 00:33:16.527 "num_base_bdevs_operational": 4, 00:33:16.527 "base_bdevs_list": [ 00:33:16.527 { 00:33:16.527 "name": "NewBaseBdev", 00:33:16.527 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:16.527 "is_configured": true, 00:33:16.527 "data_offset": 2048, 00:33:16.527 "data_size": 63488 00:33:16.527 }, 00:33:16.527 { 00:33:16.527 "name": "BaseBdev2", 00:33:16.527 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:16.527 "is_configured": true, 00:33:16.527 "data_offset": 2048, 00:33:16.527 "data_size": 63488 00:33:16.527 }, 00:33:16.527 { 00:33:16.527 "name": "BaseBdev3", 00:33:16.527 "uuid": "120b3620-14c5-4183-ba82-7890dda42980", 00:33:16.527 "is_configured": true, 00:33:16.527 "data_offset": 2048, 00:33:16.527 "data_size": 63488 00:33:16.527 }, 00:33:16.527 { 00:33:16.527 "name": "BaseBdev4", 00:33:16.527 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:16.527 "is_configured": true, 00:33:16.527 "data_offset": 2048, 00:33:16.527 "data_size": 63488 00:33:16.527 } 00:33:16.527 ] 00:33:16.527 }' 00:33:16.527 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.527 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.095 [2024-10-09 14:02:23.459291] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.095 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:17.095 "name": "Existed_Raid", 00:33:17.095 "aliases": [ 00:33:17.095 "1a326393-8b65-4c58-9198-dfee998ea4c3" 00:33:17.095 ], 00:33:17.095 "product_name": "Raid Volume", 00:33:17.095 "block_size": 512, 00:33:17.095 "num_blocks": 63488, 00:33:17.095 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:17.095 "assigned_rate_limits": { 00:33:17.095 "rw_ios_per_sec": 0, 00:33:17.095 "rw_mbytes_per_sec": 0, 00:33:17.095 "r_mbytes_per_sec": 0, 00:33:17.095 "w_mbytes_per_sec": 0 00:33:17.095 }, 00:33:17.095 "claimed": false, 00:33:17.095 "zoned": false, 00:33:17.095 "supported_io_types": { 00:33:17.095 "read": true, 00:33:17.095 "write": true, 00:33:17.095 "unmap": false, 00:33:17.095 "flush": false, 00:33:17.095 "reset": true, 00:33:17.095 "nvme_admin": false, 00:33:17.095 "nvme_io": false, 00:33:17.095 "nvme_io_md": false, 00:33:17.095 "write_zeroes": true, 00:33:17.095 "zcopy": false, 00:33:17.095 "get_zone_info": false, 00:33:17.095 "zone_management": false, 00:33:17.095 "zone_append": false, 00:33:17.095 "compare": false, 00:33:17.096 "compare_and_write": false, 00:33:17.096 "abort": false, 00:33:17.096 "seek_hole": false, 
00:33:17.096 "seek_data": false, 00:33:17.096 "copy": false, 00:33:17.096 "nvme_iov_md": false 00:33:17.096 }, 00:33:17.096 "memory_domains": [ 00:33:17.096 { 00:33:17.096 "dma_device_id": "system", 00:33:17.096 "dma_device_type": 1 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.096 "dma_device_type": 2 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "system", 00:33:17.096 "dma_device_type": 1 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.096 "dma_device_type": 2 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "system", 00:33:17.096 "dma_device_type": 1 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.096 "dma_device_type": 2 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "system", 00:33:17.096 "dma_device_type": 1 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:17.096 "dma_device_type": 2 00:33:17.096 } 00:33:17.096 ], 00:33:17.096 "driver_specific": { 00:33:17.096 "raid": { 00:33:17.096 "uuid": "1a326393-8b65-4c58-9198-dfee998ea4c3", 00:33:17.096 "strip_size_kb": 0, 00:33:17.096 "state": "online", 00:33:17.096 "raid_level": "raid1", 00:33:17.096 "superblock": true, 00:33:17.096 "num_base_bdevs": 4, 00:33:17.096 "num_base_bdevs_discovered": 4, 00:33:17.096 "num_base_bdevs_operational": 4, 00:33:17.096 "base_bdevs_list": [ 00:33:17.096 { 00:33:17.096 "name": "NewBaseBdev", 00:33:17.096 "uuid": "1db4b7c5-eb8d-4e42-994f-2808eb165e94", 00:33:17.096 "is_configured": true, 00:33:17.096 "data_offset": 2048, 00:33:17.096 "data_size": 63488 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "name": "BaseBdev2", 00:33:17.096 "uuid": "ee76f0c5-835a-4c61-aaf7-da6d1f9643bf", 00:33:17.096 "is_configured": true, 00:33:17.096 "data_offset": 2048, 00:33:17.096 "data_size": 63488 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "name": "BaseBdev3", 00:33:17.096 "uuid": 
"120b3620-14c5-4183-ba82-7890dda42980", 00:33:17.096 "is_configured": true, 00:33:17.096 "data_offset": 2048, 00:33:17.096 "data_size": 63488 00:33:17.096 }, 00:33:17.096 { 00:33:17.096 "name": "BaseBdev4", 00:33:17.096 "uuid": "c5d1c232-7caa-4bbe-aa7b-576e945f8976", 00:33:17.096 "is_configured": true, 00:33:17.096 "data_offset": 2048, 00:33:17.096 "data_size": 63488 00:33:17.096 } 00:33:17.096 ] 00:33:17.096 } 00:33:17.096 } 00:33:17.096 }' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:17.096 BaseBdev2 00:33:17.096 BaseBdev3 00:33:17.096 BaseBdev4' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.096 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.355 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.356 
14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.356 [2024-10-09 14:02:23.779056] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:17.356 [2024-10-09 14:02:23.779088] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:17.356 [2024-10-09 14:02:23.779176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:17.356 [2024-10-09 14:02:23.779457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:17.356 [2024-10-09 14:02:23.779479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:33:17.356 14:02:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84998 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84998 ']' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84998 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84998 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:17.356 killing process with pid 84998 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84998' 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84998 00:33:17.356 [2024-10-09 14:02:23.824428] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:17.356 14:02:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84998 00:33:17.356 [2024-10-09 14:02:23.865276] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:17.614 14:02:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:33:17.614 00:33:17.614 real 0m9.731s 00:33:17.614 user 0m16.785s 00:33:17.614 sys 0m2.075s 00:33:17.614 14:02:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:17.614 14:02:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:17.614 ************************************ 00:33:17.614 END TEST raid_state_function_test_sb 00:33:17.614 ************************************ 00:33:17.872 14:02:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:33:17.872 14:02:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:17.872 14:02:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:17.872 14:02:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:17.872 ************************************ 00:33:17.872 START TEST raid_superblock_test 00:33:17.872 ************************************ 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85653 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85653 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85653 ']' 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:33:17.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:17.872 14:02:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:17.872 [2024-10-09 14:02:24.260486] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:17.872 [2024-10-09 14:02:24.261211] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85653 ] 00:33:17.872 [2024-10-09 14:02:24.421731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.131 [2024-10-09 14:02:24.466853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.131 [2024-10-09 14:02:24.510457] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:18.131 [2024-10-09 14:02:24.510499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:33:19.066 
14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.066 malloc1 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.066 [2024-10-09 14:02:25.270691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:19.066 [2024-10-09 14:02:25.271133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.066 [2024-10-09 14:02:25.271168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:19.066 [2024-10-09 14:02:25.271187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.066 [2024-10-09 14:02:25.273698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.066 [2024-10-09 14:02:25.273741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:19.066 pt1 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:33:19.066 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 malloc2 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 [2024-10-09 14:02:25.309964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:19.067 [2024-10-09 14:02:25.310039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.067 [2024-10-09 14:02:25.310067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:19.067 [2024-10-09 14:02:25.310088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.067 [2024-10-09 14:02:25.312604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.067 [2024-10-09 14:02:25.312642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:19.067 
pt2 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 malloc3 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 [2024-10-09 14:02:25.338916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:19.067 [2024-10-09 14:02:25.338967] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.067 [2024-10-09 14:02:25.338987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:19.067 [2024-10-09 14:02:25.339001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.067 [2024-10-09 14:02:25.341457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.067 [2024-10-09 14:02:25.341496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:19.067 pt3 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 malloc4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 [2024-10-09 14:02:25.367808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:19.067 [2024-10-09 14:02:25.367871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:19.067 [2024-10-09 14:02:25.367888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:19.067 [2024-10-09 14:02:25.367905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:19.067 [2024-10-09 14:02:25.370346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:19.067 [2024-10-09 14:02:25.370387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:19.067 pt4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 [2024-10-09 14:02:25.379905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:19.067 [2024-10-09 14:02:25.382134] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:19.067 [2024-10-09 14:02:25.382199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:19.067 [2024-10-09 14:02:25.382242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:19.067 [2024-10-09 14:02:25.382399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:33:19.067 [2024-10-09 14:02:25.382414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:19.067 [2024-10-09 14:02:25.382708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:19.067 [2024-10-09 14:02:25.382873] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:33:19.067 [2024-10-09 14:02:25.382885] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:33:19.067 [2024-10-09 14:02:25.382994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:19.067 
14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.067 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:19.067 "name": "raid_bdev1", 00:33:19.067 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:19.067 "strip_size_kb": 0, 00:33:19.067 "state": "online", 00:33:19.067 "raid_level": "raid1", 00:33:19.067 "superblock": true, 00:33:19.067 "num_base_bdevs": 4, 00:33:19.067 "num_base_bdevs_discovered": 4, 00:33:19.067 "num_base_bdevs_operational": 4, 00:33:19.067 "base_bdevs_list": [ 00:33:19.067 { 00:33:19.067 "name": "pt1", 00:33:19.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:19.067 "is_configured": true, 00:33:19.067 "data_offset": 2048, 00:33:19.067 "data_size": 63488 00:33:19.067 }, 00:33:19.067 { 00:33:19.067 "name": "pt2", 00:33:19.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:19.067 "is_configured": true, 00:33:19.067 "data_offset": 2048, 00:33:19.067 "data_size": 63488 00:33:19.067 }, 00:33:19.067 { 00:33:19.067 "name": "pt3", 00:33:19.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:19.067 "is_configured": true, 00:33:19.067 "data_offset": 2048, 00:33:19.067 "data_size": 63488 
00:33:19.068 }, 00:33:19.068 { 00:33:19.068 "name": "pt4", 00:33:19.068 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:19.068 "is_configured": true, 00:33:19.068 "data_offset": 2048, 00:33:19.068 "data_size": 63488 00:33:19.068 } 00:33:19.068 ] 00:33:19.068 }' 00:33:19.068 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:19.068 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:19.326 [2024-10-09 14:02:25.808263] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.326 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:19.326 "name": "raid_bdev1", 00:33:19.326 "aliases": [ 00:33:19.326 "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407" 00:33:19.326 ], 
00:33:19.326 "product_name": "Raid Volume", 00:33:19.326 "block_size": 512, 00:33:19.326 "num_blocks": 63488, 00:33:19.326 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:19.326 "assigned_rate_limits": { 00:33:19.326 "rw_ios_per_sec": 0, 00:33:19.326 "rw_mbytes_per_sec": 0, 00:33:19.326 "r_mbytes_per_sec": 0, 00:33:19.326 "w_mbytes_per_sec": 0 00:33:19.326 }, 00:33:19.326 "claimed": false, 00:33:19.326 "zoned": false, 00:33:19.326 "supported_io_types": { 00:33:19.326 "read": true, 00:33:19.326 "write": true, 00:33:19.326 "unmap": false, 00:33:19.326 "flush": false, 00:33:19.326 "reset": true, 00:33:19.326 "nvme_admin": false, 00:33:19.326 "nvme_io": false, 00:33:19.326 "nvme_io_md": false, 00:33:19.326 "write_zeroes": true, 00:33:19.326 "zcopy": false, 00:33:19.326 "get_zone_info": false, 00:33:19.326 "zone_management": false, 00:33:19.326 "zone_append": false, 00:33:19.326 "compare": false, 00:33:19.326 "compare_and_write": false, 00:33:19.326 "abort": false, 00:33:19.326 "seek_hole": false, 00:33:19.326 "seek_data": false, 00:33:19.326 "copy": false, 00:33:19.326 "nvme_iov_md": false 00:33:19.326 }, 00:33:19.326 "memory_domains": [ 00:33:19.326 { 00:33:19.326 "dma_device_id": "system", 00:33:19.326 "dma_device_type": 1 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.326 "dma_device_type": 2 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "system", 00:33:19.326 "dma_device_type": 1 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.326 "dma_device_type": 2 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "system", 00:33:19.326 "dma_device_type": 1 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.326 "dma_device_type": 2 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": "system", 00:33:19.326 "dma_device_type": 1 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:19.326 "dma_device_type": 2 00:33:19.326 } 00:33:19.326 ], 00:33:19.326 "driver_specific": { 00:33:19.326 "raid": { 00:33:19.326 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:19.326 "strip_size_kb": 0, 00:33:19.326 "state": "online", 00:33:19.326 "raid_level": "raid1", 00:33:19.326 "superblock": true, 00:33:19.326 "num_base_bdevs": 4, 00:33:19.326 "num_base_bdevs_discovered": 4, 00:33:19.326 "num_base_bdevs_operational": 4, 00:33:19.326 "base_bdevs_list": [ 00:33:19.326 { 00:33:19.326 "name": "pt1", 00:33:19.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:19.326 "is_configured": true, 00:33:19.326 "data_offset": 2048, 00:33:19.326 "data_size": 63488 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "name": "pt2", 00:33:19.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:19.326 "is_configured": true, 00:33:19.326 "data_offset": 2048, 00:33:19.326 "data_size": 63488 00:33:19.326 }, 00:33:19.326 { 00:33:19.326 "name": "pt3", 00:33:19.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:19.326 "is_configured": true, 00:33:19.326 "data_offset": 2048, 00:33:19.326 "data_size": 63488 00:33:19.326 }, 00:33:19.326 { 00:33:19.327 "name": "pt4", 00:33:19.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:19.327 "is_configured": true, 00:33:19.327 "data_offset": 2048, 00:33:19.327 "data_size": 63488 00:33:19.327 } 00:33:19.327 ] 00:33:19.327 } 00:33:19.327 } 00:33:19.327 }' 00:33:19.327 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:33:19.584 pt2 00:33:19.584 pt3 00:33:19.584 pt4' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 14:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:19.584 14:02:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.584 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.585 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:33:19.843 [2024-10-09 14:02:26.140279] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 ']' 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 [2024-10-09 14:02:26.183990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:19.843 [2024-10-09 14:02:26.184026] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:19.843 [2024-10-09 14:02:26.184091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:19.843 [2024-10-09 14:02:26.184191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:19.843 [2024-10-09 14:02:26.184214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:19.843 [2024-10-09 14:02:26.328059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:19.843 [2024-10-09 14:02:26.330273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:19.843 [2024-10-09 14:02:26.330330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:19.843 [2024-10-09 14:02:26.330361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:33:19.843 [2024-10-09 14:02:26.330408] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:19.843 [2024-10-09 14:02:26.330452] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:19.843 [2024-10-09 14:02:26.330475] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:19.843 [2024-10-09 14:02:26.330494] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:33:19.843 [2024-10-09 14:02:26.330512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:19.843 [2024-10-09 14:02:26.330522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 
00:33:19.843 request:
00:33:19.843 {
00:33:19.843 "name": "raid_bdev1",
00:33:19.843 "raid_level": "raid1",
00:33:19.843 "base_bdevs": [
00:33:19.843 "malloc1",
00:33:19.843 "malloc2",
00:33:19.843 "malloc3",
00:33:19.843 "malloc4"
00:33:19.843 ],
00:33:19.843 "superblock": false,
00:33:19.843 "method": "bdev_raid_create",
00:33:19.843 "req_id": 1
00:33:19.843 }
00:33:19.843 Got JSON-RPC error response
00:33:19.843 response:
00:33:19.843 {
00:33:19.843 "code": -17,
00:33:19.843 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:33:19.843 }
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:33:19.843 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:33:19.844 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:19.844 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:19.844 [2024-10-09 14:02:26.388018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:33:19.844 [2024-10-09 14:02:26.388173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:19.844 [2024-10-09 14:02:26.388294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:33:19.844 [2024-10-09 14:02:26.388383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:19.844 [2024-10-09 14:02:26.391029] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:19.844 [2024-10-09 14:02:26.391161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:33:19.844 [2024-10-09 14:02:26.391302] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:33:19.844 [2024-10-09 14:02:26.391373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:33:19.844 pt1
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:33:20.101 "name": "raid_bdev1",
00:33:20.101 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:20.101 "strip_size_kb": 0,
00:33:20.101 "state": "configuring",
00:33:20.101 "raid_level": "raid1",
00:33:20.101 "superblock": true,
00:33:20.101 "num_base_bdevs": 4,
00:33:20.101 "num_base_bdevs_discovered": 1,
00:33:20.101 "num_base_bdevs_operational": 4,
00:33:20.101 "base_bdevs_list": [
00:33:20.101 {
00:33:20.101 "name": "pt1",
00:33:20.101 "uuid": "00000000-0000-0000-0000-000000000001",
00:33:20.101 "is_configured": true,
00:33:20.101 "data_offset": 2048,
00:33:20.101 "data_size": 63488
00:33:20.101 },
00:33:20.101 {
00:33:20.101 "name": null,
00:33:20.101 "uuid": "00000000-0000-0000-0000-000000000002",
00:33:20.101 "is_configured": false,
00:33:20.101 "data_offset": 2048,
00:33:20.101 "data_size": 63488
00:33:20.101 },
00:33:20.101 {
00:33:20.101 "name": null,
00:33:20.101 "uuid": "00000000-0000-0000-0000-000000000003",
00:33:20.101 "is_configured": false,
00:33:20.101 "data_offset": 2048,
00:33:20.101 "data_size": 63488
00:33:20.101 },
00:33:20.101 {
00:33:20.101 "name": null,
00:33:20.101 "uuid": "00000000-0000-0000-0000-000000000004",
00:33:20.101 "is_configured": false,
00:33:20.101 "data_offset": 2048,
00:33:20.101 "data_size": 63488
00:33:20.101 }
00:33:20.101 ]
00:33:20.101 }'
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:33:20.101 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.360 [2024-10-09 14:02:26.844169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:33:20.360 [2024-10-09 14:02:26.844240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:20.360 [2024-10-09 14:02:26.844266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:33:20.360 [2024-10-09 14:02:26.844279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:20.360 [2024-10-09 14:02:26.844712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:20.360 [2024-10-09 14:02:26.844732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:33:20.360 [2024-10-09 14:02:26.844809] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:33:20.360 [2024-10-09 14:02:26.844838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:33:20.360 pt2
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.360 [2024-10-09 14:02:26.852164] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:33:20.360 "name": "raid_bdev1",
00:33:20.360 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:20.360 "strip_size_kb": 0,
00:33:20.360 "state": "configuring",
00:33:20.360 "raid_level": "raid1",
00:33:20.360 "superblock": true,
00:33:20.360 "num_base_bdevs": 4,
00:33:20.360 "num_base_bdevs_discovered": 1,
00:33:20.360 "num_base_bdevs_operational": 4,
00:33:20.360 "base_bdevs_list": [
00:33:20.360 {
00:33:20.360 "name": "pt1",
00:33:20.360 "uuid": "00000000-0000-0000-0000-000000000001",
00:33:20.360 "is_configured": true,
00:33:20.360 "data_offset": 2048,
00:33:20.360 "data_size": 63488
00:33:20.360 },
00:33:20.360 {
00:33:20.360 "name": null,
00:33:20.360 "uuid": "00000000-0000-0000-0000-000000000002",
00:33:20.360 "is_configured": false,
00:33:20.360 "data_offset": 0,
00:33:20.360 "data_size": 63488
00:33:20.360 },
00:33:20.360 {
00:33:20.360 "name": null,
00:33:20.360 "uuid": "00000000-0000-0000-0000-000000000003",
00:33:20.360 "is_configured": false,
00:33:20.360 "data_offset": 2048,
00:33:20.360 "data_size": 63488
00:33:20.360 },
00:33:20.360 {
00:33:20.360 "name": null,
00:33:20.360 "uuid": "00000000-0000-0000-0000-000000000004",
00:33:20.360 "is_configured": false,
00:33:20.360 "data_offset": 2048,
00:33:20.360 "data_size": 63488
00:33:20.360 }
00:33:20.360 ]
00:33:20.360 }'
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:33:20.360 14:02:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.925 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:33:20.925 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:33:20.925 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:33:20.925 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.925 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.925 [2024-10-09 14:02:27.304263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:33:20.926 [2024-10-09 14:02:27.304476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:20.926 [2024-10-09 14:02:27.304507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:33:20.926 [2024-10-09 14:02:27.304521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:20.926 [2024-10-09 14:02:27.304938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:20.926 [2024-10-09 14:02:27.304962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:33:20.926 [2024-10-09 14:02:27.305038] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:33:20.926 [2024-10-09 14:02:27.305063] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:33:20.926 pt2
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.926 [2024-10-09 14:02:27.312207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:33:20.926 [2024-10-09 14:02:27.312282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:20.926 [2024-10-09 14:02:27.312302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:33:20.926 [2024-10-09 14:02:27.312316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:20.926 [2024-10-09 14:02:27.312681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:20.926 [2024-10-09 14:02:27.312703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:33:20.926 [2024-10-09 14:02:27.312763] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:33:20.926 [2024-10-09 14:02:27.312785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:33:20.926 pt3
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.926 [2024-10-09 14:02:27.320227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:33:20.926 [2024-10-09 14:02:27.320281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:20.926 [2024-10-09 14:02:27.320298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:33:20.926 [2024-10-09 14:02:27.320311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:20.926 [2024-10-09 14:02:27.320681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:20.926 [2024-10-09 14:02:27.320715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:33:20.926 [2024-10-09 14:02:27.320783] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:33:20.926 [2024-10-09 14:02:27.320809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:33:20.926 [2024-10-09 14:02:27.320913] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:33:20.926 [2024-10-09 14:02:27.320927] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:33:20.926 [2024-10-09 14:02:27.321180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:33:20.926 [2024-10-09 14:02:27.321299] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:33:20.926 [2024-10-09 14:02:27.321310] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:33:20.926 [2024-10-09 14:02:27.321410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:33:20.926 pt4
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:33:20.926 "name": "raid_bdev1",
00:33:20.926 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:20.926 "strip_size_kb": 0,
00:33:20.926 "state": "online",
00:33:20.926 "raid_level": "raid1",
00:33:20.926 "superblock": true,
00:33:20.926 "num_base_bdevs": 4,
00:33:20.926 "num_base_bdevs_discovered": 4,
00:33:20.926 "num_base_bdevs_operational": 4,
00:33:20.926 "base_bdevs_list": [
00:33:20.926 {
00:33:20.926 "name": "pt1",
00:33:20.926 "uuid": "00000000-0000-0000-0000-000000000001",
00:33:20.926 "is_configured": true,
00:33:20.926 "data_offset": 2048,
00:33:20.926 "data_size": 63488
00:33:20.926 },
00:33:20.926 {
00:33:20.926 "name": "pt2",
00:33:20.926 "uuid": "00000000-0000-0000-0000-000000000002",
00:33:20.926 "is_configured": true,
00:33:20.926 "data_offset": 2048,
00:33:20.926 "data_size": 63488
00:33:20.926 },
00:33:20.926 {
00:33:20.926 "name": "pt3",
00:33:20.926 "uuid": "00000000-0000-0000-0000-000000000003",
00:33:20.926 "is_configured": true,
00:33:20.926 "data_offset": 2048,
00:33:20.926 "data_size": 63488
00:33:20.926 },
00:33:20.926 {
00:33:20.926 "name": "pt4",
00:33:20.926 "uuid": "00000000-0000-0000-0000-000000000004",
00:33:20.926 "is_configured": true,
00:33:20.926 "data_offset": 2048,
00:33:20.926 "data_size": 63488
00:33:20.926 }
00:33:20.926 ]
00:33:20.926 }'
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:33:20.926 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.505 [2024-10-09 14:02:27.800680] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.505 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:33:21.505 "name": "raid_bdev1",
00:33:21.505 "aliases": [
00:33:21.505 "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407"
00:33:21.505 ],
00:33:21.505 "product_name": "Raid Volume",
00:33:21.505 "block_size": 512,
00:33:21.505 "num_blocks": 63488,
00:33:21.505 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:21.505 "assigned_rate_limits": {
00:33:21.505 "rw_ios_per_sec": 0,
00:33:21.505 "rw_mbytes_per_sec": 0,
00:33:21.505 "r_mbytes_per_sec": 0,
00:33:21.505 "w_mbytes_per_sec": 0
00:33:21.505 },
00:33:21.505 "claimed": false,
00:33:21.505 "zoned": false,
00:33:21.505 "supported_io_types": {
00:33:21.505 "read": true,
00:33:21.505 "write": true,
00:33:21.505 "unmap": false,
00:33:21.505 "flush": false,
00:33:21.505 "reset": true,
00:33:21.505 "nvme_admin": false,
00:33:21.505 "nvme_io": false,
00:33:21.505 "nvme_io_md": false,
00:33:21.505 "write_zeroes": true,
00:33:21.505 "zcopy": false,
00:33:21.505 "get_zone_info": false,
00:33:21.505 "zone_management": false,
00:33:21.505 "zone_append": false,
00:33:21.505 "compare": false,
00:33:21.505 "compare_and_write": false,
00:33:21.505 "abort": false,
00:33:21.505 "seek_hole": false,
00:33:21.505 "seek_data": false,
00:33:21.505 "copy": false,
00:33:21.505 "nvme_iov_md": false
00:33:21.505 },
00:33:21.505 "memory_domains": [
00:33:21.505 {
00:33:21.505 "dma_device_id": "system",
00:33:21.505 "dma_device_type": 1
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:21.505 "dma_device_type": 2
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "system",
00:33:21.505 "dma_device_type": 1
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:21.505 "dma_device_type": 2
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "system",
00:33:21.505 "dma_device_type": 1
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:21.505 "dma_device_type": 2
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "system",
00:33:21.505 "dma_device_type": 1
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:33:21.505 "dma_device_type": 2
00:33:21.505 }
00:33:21.505 ],
00:33:21.505 "driver_specific": {
00:33:21.505 "raid": {
00:33:21.505 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:21.505 "strip_size_kb": 0,
00:33:21.505 "state": "online",
00:33:21.505 "raid_level": "raid1",
00:33:21.505 "superblock": true,
00:33:21.505 "num_base_bdevs": 4,
00:33:21.505 "num_base_bdevs_discovered": 4,
00:33:21.505 "num_base_bdevs_operational": 4,
00:33:21.505 "base_bdevs_list": [
00:33:21.505 {
00:33:21.505 "name": "pt1",
00:33:21.505 "uuid": "00000000-0000-0000-0000-000000000001",
00:33:21.505 "is_configured": true,
00:33:21.505 "data_offset": 2048,
00:33:21.505 "data_size": 63488
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "name": "pt2",
00:33:21.505 "uuid": "00000000-0000-0000-0000-000000000002",
00:33:21.505 "is_configured": true,
00:33:21.505 "data_offset": 2048,
00:33:21.505 "data_size": 63488
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "name": "pt3",
00:33:21.505 "uuid": "00000000-0000-0000-0000-000000000003",
00:33:21.505 "is_configured": true,
00:33:21.505 "data_offset": 2048,
00:33:21.505 "data_size": 63488
00:33:21.505 },
00:33:21.505 {
00:33:21.505 "name": "pt4",
00:33:21.505 "uuid": "00000000-0000-0000-0000-000000000004",
00:33:21.505 "is_configured": true,
00:33:21.505 "data_offset": 2048,
00:33:21.506 "data_size": 63488
00:33:21.506 }
00:33:21.506 ]
00:33:21.506 }
00:33:21.506 }
00:33:21.506 }'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:33:21.506 pt2
00:33:21.506 pt3
00:33:21.506 pt4'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:33:21.506 14:02:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.506 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.766 [2024-10-09 14:02:28.120726] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 '!=' 981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 ']'
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.766 [2024-10-09 14:02:28.164454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:33:21.766 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:33:21.767 "name": "raid_bdev1",
00:33:21.767 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407",
00:33:21.767 "strip_size_kb": 0,
00:33:21.767 "state": "online",
00:33:21.767 "raid_level": "raid1",
00:33:21.767 "superblock": true,
00:33:21.767 "num_base_bdevs": 4,
00:33:21.767 "num_base_bdevs_discovered": 3,
00:33:21.767 "num_base_bdevs_operational": 3,
00:33:21.767 "base_bdevs_list": [
00:33:21.767 {
00:33:21.767 "name": null,
00:33:21.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:33:21.767 "is_configured": false,
00:33:21.767 "data_offset": 0,
00:33:21.767 "data_size": 63488
00:33:21.767 },
00:33:21.767 {
00:33:21.767 "name": "pt2",
00:33:21.767 "uuid": "00000000-0000-0000-0000-000000000002",
00:33:21.767 "is_configured": true,
00:33:21.767 "data_offset": 2048,
00:33:21.767 "data_size": 63488
00:33:21.767 },
00:33:21.767 {
00:33:21.767 "name": "pt3",
00:33:21.767 "uuid": "00000000-0000-0000-0000-000000000003",
00:33:21.767 "is_configured": true,
00:33:21.767 "data_offset": 2048,
00:33:21.767 "data_size": 63488
00:33:21.767 },
00:33:21.767 {
00:33:21.767 "name": "pt4",
00:33:21.767 "uuid": "00000000-0000-0000-0000-000000000004",
00:33:21.767 "is_configured": true,
00:33:21.767 "data_offset": 2048,
00:33:21.767 "data_size": 63488
00:33:21.767 }
00:33:21.767 ]
00:33:21.767 }'
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:33:21.767 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 [2024-10-09 14:02:28.604487] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:33:22.333 [2024-10-09 14:02:28.604663] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:33:22.333 [2024-10-09 14:02:28.604766] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:33:22.333 [2024-10-09 14:02:28.604839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:33:22.333 [2024-10-09 14:02:28.604854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:33:22.333 [2024-10-09 14:02:28.684482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:33:22.333 [2024-10-09 14:02:28.684562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:33:22.333 [2024-10-09 14:02:28.684594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:33:22.333 [2024-10-09 14:02:28.684610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:33:22.333 [2024-10-09 14:02:28.687254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:33:22.333 [2024-10-09 14:02:28.687288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:33:22.333 [2024-10-09 14:02:28.687357] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:33:22.333 [2024-10-09 14:02:28.687394] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:33:22.333 pt2
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:33:22.333 14:02:28 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@111 -- # local tmp 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.333 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:22.333 "name": "raid_bdev1", 00:33:22.333 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:22.333 "strip_size_kb": 0, 00:33:22.333 "state": "configuring", 00:33:22.333 "raid_level": "raid1", 00:33:22.333 "superblock": true, 00:33:22.333 "num_base_bdevs": 4, 00:33:22.333 "num_base_bdevs_discovered": 1, 00:33:22.333 "num_base_bdevs_operational": 3, 00:33:22.333 "base_bdevs_list": [ 00:33:22.333 { 00:33:22.333 "name": null, 00:33:22.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.333 "is_configured": false, 00:33:22.333 "data_offset": 2048, 00:33:22.333 "data_size": 63488 00:33:22.333 }, 00:33:22.333 { 00:33:22.333 "name": "pt2", 00:33:22.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:22.333 "is_configured": true, 00:33:22.333 "data_offset": 2048, 00:33:22.333 "data_size": 63488 00:33:22.333 }, 00:33:22.333 { 00:33:22.333 "name": null, 00:33:22.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:22.334 "is_configured": false, 00:33:22.334 "data_offset": 2048, 00:33:22.334 "data_size": 63488 00:33:22.334 }, 00:33:22.334 { 00:33:22.334 "name": null, 00:33:22.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:22.334 "is_configured": false, 00:33:22.334 "data_offset": 2048, 00:33:22.334 "data_size": 63488 00:33:22.334 } 00:33:22.334 ] 00:33:22.334 }' 
00:33:22.334 14:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:22.334 14:02:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:22.591 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:33:22.591 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:22.591 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:22.591 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.591 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:22.850 [2024-10-09 14:02:29.140664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:22.850 [2024-10-09 14:02:29.140728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:22.850 [2024-10-09 14:02:29.140751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:22.850 [2024-10-09 14:02:29.140784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:22.850 [2024-10-09 14:02:29.141258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:22.850 [2024-10-09 14:02:29.141283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:22.850 [2024-10-09 14:02:29.141362] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:22.850 [2024-10-09 14:02:29.141389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:22.850 pt3 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:22.850 "name": "raid_bdev1", 00:33:22.850 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:22.850 "strip_size_kb": 0, 00:33:22.850 "state": "configuring", 00:33:22.850 "raid_level": "raid1", 00:33:22.850 "superblock": true, 00:33:22.850 "num_base_bdevs": 4, 00:33:22.850 "num_base_bdevs_discovered": 2, 00:33:22.850 "num_base_bdevs_operational": 3, 00:33:22.850 
"base_bdevs_list": [ 00:33:22.850 { 00:33:22.850 "name": null, 00:33:22.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.850 "is_configured": false, 00:33:22.850 "data_offset": 2048, 00:33:22.850 "data_size": 63488 00:33:22.850 }, 00:33:22.850 { 00:33:22.850 "name": "pt2", 00:33:22.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:22.850 "is_configured": true, 00:33:22.850 "data_offset": 2048, 00:33:22.850 "data_size": 63488 00:33:22.850 }, 00:33:22.850 { 00:33:22.850 "name": "pt3", 00:33:22.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:22.850 "is_configured": true, 00:33:22.850 "data_offset": 2048, 00:33:22.850 "data_size": 63488 00:33:22.850 }, 00:33:22.850 { 00:33:22.850 "name": null, 00:33:22.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:22.850 "is_configured": false, 00:33:22.850 "data_offset": 2048, 00:33:22.850 "data_size": 63488 00:33:22.850 } 00:33:22.850 ] 00:33:22.850 }' 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:22.850 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.108 [2024-10-09 14:02:29.612729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:23.108 [2024-10-09 14:02:29.612802] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:23.108 [2024-10-09 14:02:29.612826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:23.108 [2024-10-09 14:02:29.612841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.108 [2024-10-09 14:02:29.613245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.108 [2024-10-09 14:02:29.613267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:23.108 [2024-10-09 14:02:29.613346] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:23.108 [2024-10-09 14:02:29.613380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:23.108 [2024-10-09 14:02:29.613480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:33:23.108 [2024-10-09 14:02:29.613493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:23.108 [2024-10-09 14:02:29.613801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:23.108 [2024-10-09 14:02:29.613935] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:33:23.108 [2024-10-09 14:02:29.613946] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:33:23.108 [2024-10-09 14:02:29.614062] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.108 pt4 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.108 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.366 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.366 "name": "raid_bdev1", 00:33:23.366 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:23.366 "strip_size_kb": 0, 00:33:23.366 "state": "online", 00:33:23.366 "raid_level": "raid1", 00:33:23.366 "superblock": true, 00:33:23.366 "num_base_bdevs": 4, 00:33:23.366 "num_base_bdevs_discovered": 3, 00:33:23.366 "num_base_bdevs_operational": 3, 00:33:23.366 "base_bdevs_list": [ 00:33:23.366 { 00:33:23.366 "name": null, 00:33:23.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.366 "is_configured": false, 00:33:23.366 
"data_offset": 2048, 00:33:23.366 "data_size": 63488 00:33:23.367 }, 00:33:23.367 { 00:33:23.367 "name": "pt2", 00:33:23.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:23.367 "is_configured": true, 00:33:23.367 "data_offset": 2048, 00:33:23.367 "data_size": 63488 00:33:23.367 }, 00:33:23.367 { 00:33:23.367 "name": "pt3", 00:33:23.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:23.367 "is_configured": true, 00:33:23.367 "data_offset": 2048, 00:33:23.367 "data_size": 63488 00:33:23.367 }, 00:33:23.367 { 00:33:23.367 "name": "pt4", 00:33:23.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:23.367 "is_configured": true, 00:33:23.367 "data_offset": 2048, 00:33:23.367 "data_size": 63488 00:33:23.367 } 00:33:23.367 ] 00:33:23.367 }' 00:33:23.367 14:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.367 14:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.624 [2024-10-09 14:02:30.068844] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:23.624 [2024-10-09 14:02:30.068882] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:23.624 [2024-10-09 14:02:30.068959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:23.624 [2024-10-09 14:02:30.069035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:23.624 [2024-10-09 14:02:30.069047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:33:23.624 14:02:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.624 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.624 [2024-10-09 14:02:30.128869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:23.624 [2024-10-09 14:02:30.128929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:33:23.624 [2024-10-09 14:02:30.128956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:23.624 [2024-10-09 14:02:30.128968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:23.624 [2024-10-09 14:02:30.131645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:23.624 [2024-10-09 14:02:30.131680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:23.624 [2024-10-09 14:02:30.131753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:23.625 [2024-10-09 14:02:30.131792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:23.625 [2024-10-09 14:02:30.131898] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:23.625 [2024-10-09 14:02:30.131923] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:23.625 [2024-10-09 14:02:30.131941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:33:23.625 [2024-10-09 14:02:30.131979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:23.625 [2024-10-09 14:02:30.132073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:23.625 pt1 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:23.625 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.883 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.883 "name": "raid_bdev1", 00:33:23.883 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:23.883 "strip_size_kb": 0, 00:33:23.883 "state": "configuring", 00:33:23.883 "raid_level": "raid1", 00:33:23.883 "superblock": true, 00:33:23.883 "num_base_bdevs": 4, 00:33:23.883 "num_base_bdevs_discovered": 2, 00:33:23.883 "num_base_bdevs_operational": 3, 00:33:23.883 "base_bdevs_list": [ 00:33:23.883 { 00:33:23.883 "name": null, 00:33:23.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.883 "is_configured": false, 00:33:23.883 "data_offset": 2048, 00:33:23.883 
"data_size": 63488 00:33:23.883 }, 00:33:23.883 { 00:33:23.883 "name": "pt2", 00:33:23.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:23.883 "is_configured": true, 00:33:23.883 "data_offset": 2048, 00:33:23.883 "data_size": 63488 00:33:23.883 }, 00:33:23.883 { 00:33:23.883 "name": "pt3", 00:33:23.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:23.883 "is_configured": true, 00:33:23.883 "data_offset": 2048, 00:33:23.883 "data_size": 63488 00:33:23.883 }, 00:33:23.883 { 00:33:23.883 "name": null, 00:33:23.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:23.883 "is_configured": false, 00:33:23.883 "data_offset": 2048, 00:33:23.883 "data_size": 63488 00:33:23.883 } 00:33:23.883 ] 00:33:23.883 }' 00:33:23.883 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.883 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.141 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.141 [2024-10-09 
14:02:30.584942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:24.141 [2024-10-09 14:02:30.585003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.141 [2024-10-09 14:02:30.585027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:33:24.141 [2024-10-09 14:02:30.585042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.141 [2024-10-09 14:02:30.585472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.141 [2024-10-09 14:02:30.585494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:24.141 [2024-10-09 14:02:30.585581] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:24.142 [2024-10-09 14:02:30.585609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:24.142 [2024-10-09 14:02:30.585739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:33:24.142 [2024-10-09 14:02:30.585757] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:24.142 [2024-10-09 14:02:30.586026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:24.142 [2024-10-09 14:02:30.586144] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:33:24.142 [2024-10-09 14:02:30.586170] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:33:24.142 [2024-10-09 14:02:30.586286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.142 pt4 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:24.142 14:02:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:24.142 "name": "raid_bdev1", 00:33:24.142 "uuid": "981f0476-ac86-4a9c-a8a3-ba7bf7ab4407", 00:33:24.142 "strip_size_kb": 0, 00:33:24.142 "state": "online", 00:33:24.142 "raid_level": "raid1", 00:33:24.142 "superblock": true, 00:33:24.142 "num_base_bdevs": 4, 00:33:24.142 "num_base_bdevs_discovered": 3, 00:33:24.142 "num_base_bdevs_operational": 3, 00:33:24.142 "base_bdevs_list": [ 00:33:24.142 { 
00:33:24.142 "name": null, 00:33:24.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.142 "is_configured": false, 00:33:24.142 "data_offset": 2048, 00:33:24.142 "data_size": 63488 00:33:24.142 }, 00:33:24.142 { 00:33:24.142 "name": "pt2", 00:33:24.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.142 "is_configured": true, 00:33:24.142 "data_offset": 2048, 00:33:24.142 "data_size": 63488 00:33:24.142 }, 00:33:24.142 { 00:33:24.142 "name": "pt3", 00:33:24.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:33:24.142 "is_configured": true, 00:33:24.142 "data_offset": 2048, 00:33:24.142 "data_size": 63488 00:33:24.142 }, 00:33:24.142 { 00:33:24.142 "name": "pt4", 00:33:24.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:33:24.142 "is_configured": true, 00:33:24.142 "data_offset": 2048, 00:33:24.142 "data_size": 63488 00:33:24.142 } 00:33:24.142 ] 00:33:24.142 }' 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:24.142 14:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:24.708 
14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.708 [2024-10-09 14:02:31.085326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:24.708 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 '!=' 981f0476-ac86-4a9c-a8a3-ba7bf7ab4407 ']' 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85653 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85653 ']' 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85653 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85653 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:24.709 killing process with pid 85653 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85653' 00:33:24.709 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85653 00:33:24.709 [2024-10-09 14:02:31.156299] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:24.709 [2024-10-09 14:02:31.156388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:24.709 14:02:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85653 00:33:24.709 [2024-10-09 14:02:31.156465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:24.709 [2024-10-09 14:02:31.156477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:33:24.709 [2024-10-09 14:02:31.202780] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:24.967 14:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:33:24.967 00:33:24.967 real 0m7.268s 00:33:24.967 user 0m12.396s 00:33:24.967 sys 0m1.556s 00:33:24.967 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.967 14:02:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.967 ************************************ 00:33:24.967 END TEST raid_superblock_test 00:33:24.967 ************************************ 00:33:24.967 14:02:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:33:24.967 14:02:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:24.967 14:02:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.967 14:02:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:24.967 ************************************ 00:33:24.967 START TEST raid_read_error_test 00:33:24.967 ************************************ 00:33:24.967 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:33:24.967 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:24.967 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:33:24.967 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:33:24.967 
14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:24.967 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:25.226 14:02:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pdNWuvhD1f 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86131 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86131 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 86131 ']' 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.226 14:02:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:25.226 [2024-10-09 14:02:31.611493] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:25.226 [2024-10-09 14:02:31.611646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86131 ] 00:33:25.226 [2024-10-09 14:02:31.772612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.483 [2024-10-09 14:02:31.818875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.483 [2024-10-09 14:02:31.862145] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.483 [2024-10-09 14:02:31.862182] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.049 BaseBdev1_malloc 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.049 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.050 true 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.050 [2024-10-09 14:02:32.546339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:26.050 [2024-10-09 14:02:32.546394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.050 [2024-10-09 14:02:32.546418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:26.050 [2024-10-09 14:02:32.546431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.050 [2024-10-09 14:02:32.548981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.050 [2024-10-09 14:02:32.549018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:26.050 BaseBdev1 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.050 BaseBdev2_malloc 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.050 true 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.050 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.050 [2024-10-09 14:02:32.595806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:26.050 [2024-10-09 14:02:32.595854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.050 [2024-10-09 14:02:32.595876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:26.050 [2024-10-09 14:02:32.595887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.050 [2024-10-09 14:02:32.598375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.050 [2024-10-09 14:02:32.598412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:26.308 BaseBdev2 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 BaseBdev3_malloc 00:33:26.308 14:02:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 true 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 [2024-10-09 14:02:32.637316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:26.308 [2024-10-09 14:02:32.637377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.308 [2024-10-09 14:02:32.637400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:26.308 [2024-10-09 14:02:32.637412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.308 [2024-10-09 14:02:32.639868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.308 [2024-10-09 14:02:32.639902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:26.308 BaseBdev3 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 BaseBdev4_malloc 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 true 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 [2024-10-09 14:02:32.674466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:33:26.308 [2024-10-09 14:02:32.674513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.308 [2024-10-09 14:02:32.674538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:26.308 [2024-10-09 14:02:32.674562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.308 [2024-10-09 14:02:32.676981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.308 [2024-10-09 14:02:32.677016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:26.308 BaseBdev4 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.308 [2024-10-09 14:02:32.686522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:26.308 [2024-10-09 14:02:32.688758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:26.308 [2024-10-09 14:02:32.688848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:26.308 [2024-10-09 14:02:32.688900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:26.308 [2024-10-09 14:02:32.689111] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:33:26.308 [2024-10-09 14:02:32.689124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:26.308 [2024-10-09 14:02:32.689395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:26.308 [2024-10-09 14:02:32.689577] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:33:26.308 [2024-10-09 14:02:32.689605] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:33:26.308 [2024-10-09 14:02:32.689740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:26.308 14:02:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:26.308 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:26.309 "name": "raid_bdev1", 00:33:26.309 "uuid": "c8657c6e-456d-474d-b14e-b8aafdbf75ea", 00:33:26.309 "strip_size_kb": 0, 00:33:26.309 "state": "online", 00:33:26.309 "raid_level": "raid1", 00:33:26.309 "superblock": true, 00:33:26.309 "num_base_bdevs": 4, 00:33:26.309 "num_base_bdevs_discovered": 4, 00:33:26.309 "num_base_bdevs_operational": 4, 00:33:26.309 "base_bdevs_list": [ 00:33:26.309 { 
00:33:26.309 "name": "BaseBdev1", 00:33:26.309 "uuid": "95975f96-011b-56f0-bacb-7789b57f05b5", 00:33:26.309 "is_configured": true, 00:33:26.309 "data_offset": 2048, 00:33:26.309 "data_size": 63488 00:33:26.309 }, 00:33:26.309 { 00:33:26.309 "name": "BaseBdev2", 00:33:26.309 "uuid": "bac39dad-afe3-5cc8-b6b6-50ec56644abb", 00:33:26.309 "is_configured": true, 00:33:26.309 "data_offset": 2048, 00:33:26.309 "data_size": 63488 00:33:26.309 }, 00:33:26.309 { 00:33:26.309 "name": "BaseBdev3", 00:33:26.309 "uuid": "20486b21-9b39-5398-a5b6-4593b83d6c17", 00:33:26.309 "is_configured": true, 00:33:26.309 "data_offset": 2048, 00:33:26.309 "data_size": 63488 00:33:26.309 }, 00:33:26.309 { 00:33:26.309 "name": "BaseBdev4", 00:33:26.309 "uuid": "eaac5756-004c-5b71-a088-63257425b7dd", 00:33:26.309 "is_configured": true, 00:33:26.309 "data_offset": 2048, 00:33:26.309 "data_size": 63488 00:33:26.309 } 00:33:26.309 ] 00:33:26.309 }' 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:26.309 14:02:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:26.566 14:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:26.566 14:02:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:26.825 [2024-10-09 14:02:33.130986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.760 14:02:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:27.760 14:02:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:27.760 "name": "raid_bdev1", 00:33:27.760 "uuid": "c8657c6e-456d-474d-b14e-b8aafdbf75ea", 00:33:27.760 "strip_size_kb": 0, 00:33:27.760 "state": "online", 00:33:27.760 "raid_level": "raid1", 00:33:27.760 "superblock": true, 00:33:27.760 "num_base_bdevs": 4, 00:33:27.760 "num_base_bdevs_discovered": 4, 00:33:27.760 "num_base_bdevs_operational": 4, 00:33:27.760 "base_bdevs_list": [ 00:33:27.760 { 00:33:27.760 "name": "BaseBdev1", 00:33:27.760 "uuid": "95975f96-011b-56f0-bacb-7789b57f05b5", 00:33:27.760 "is_configured": true, 00:33:27.760 "data_offset": 2048, 00:33:27.760 "data_size": 63488 00:33:27.760 }, 00:33:27.760 { 00:33:27.760 "name": "BaseBdev2", 00:33:27.760 "uuid": "bac39dad-afe3-5cc8-b6b6-50ec56644abb", 00:33:27.760 "is_configured": true, 00:33:27.760 "data_offset": 2048, 00:33:27.760 "data_size": 63488 00:33:27.760 }, 00:33:27.760 { 00:33:27.760 "name": "BaseBdev3", 00:33:27.760 "uuid": "20486b21-9b39-5398-a5b6-4593b83d6c17", 00:33:27.760 "is_configured": true, 00:33:27.760 "data_offset": 2048, 00:33:27.760 "data_size": 63488 00:33:27.760 }, 00:33:27.760 { 00:33:27.760 "name": "BaseBdev4", 00:33:27.760 "uuid": "eaac5756-004c-5b71-a088-63257425b7dd", 00:33:27.760 "is_configured": true, 00:33:27.760 "data_offset": 2048, 00:33:27.760 "data_size": 63488 00:33:27.760 } 00:33:27.760 ] 00:33:27.760 }' 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:27.760 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:28.019 [2024-10-09 14:02:34.501997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:28.019 [2024-10-09 14:02:34.502034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:28.019 [2024-10-09 14:02:34.504572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:28.019 [2024-10-09 14:02:34.504626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:28.019 [2024-10-09 14:02:34.504748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:28.019 [2024-10-09 14:02:34.504760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:33:28.019 { 00:33:28.019 "results": [ 00:33:28.019 { 00:33:28.019 "job": "raid_bdev1", 00:33:28.019 "core_mask": "0x1", 00:33:28.019 "workload": "randrw", 00:33:28.019 "percentage": 50, 00:33:28.019 "status": "finished", 00:33:28.019 "queue_depth": 1, 00:33:28.019 "io_size": 131072, 00:33:28.019 "runtime": 1.368797, 00:33:28.019 "iops": 11583.894470838262, 00:33:28.019 "mibps": 1447.9868088547828, 00:33:28.019 "io_failed": 0, 00:33:28.019 "io_timeout": 0, 00:33:28.019 "avg_latency_us": 83.726513862861, 00:33:28.019 "min_latency_us": 22.674285714285713, 00:33:28.019 "max_latency_us": 1568.182857142857 00:33:28.019 } 00:33:28.019 ], 00:33:28.019 "core_count": 1 00:33:28.019 } 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86131 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 86131 ']' 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 86131 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86131 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:28.019 killing process with pid 86131 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86131' 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 86131 00:33:28.019 [2024-10-09 14:02:34.552152] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:28.019 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 86131 00:33:28.277 [2024-10-09 14:02:34.588755] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pdNWuvhD1f 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:28.537 00:33:28.537 real 0m3.325s 00:33:28.537 user 0m4.084s 00:33:28.537 sys 0m0.630s 
00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.537 14:02:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.537 ************************************ 00:33:28.537 END TEST raid_read_error_test 00:33:28.537 ************************************ 00:33:28.537 14:02:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:33:28.537 14:02:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:28.537 14:02:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.537 14:02:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.537 ************************************ 00:33:28.537 START TEST raid_write_error_test 00:33:28.537 ************************************ 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xdenPPDUof 00:33:28.537 14:02:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86260 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86260 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86260 ']' 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:28.537 14:02:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.537 [2024-10-09 14:02:35.030064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:28.537 [2024-10-09 14:02:35.030948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86260 ] 00:33:28.796 [2024-10-09 14:02:35.207896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.796 [2024-10-09 14:02:35.251138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.796 [2024-10-09 14:02:35.294162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:28.796 [2024-10-09 14:02:35.294199] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 BaseBdev1_malloc 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 true 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.361 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.362 [2024-10-09 14:02:35.906129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:33:29.362 [2024-10-09 14:02:35.906182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.362 [2024-10-09 14:02:35.906204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:29.362 [2024-10-09 14:02:35.906216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.362 [2024-10-09 14:02:35.908772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.362 [2024-10-09 14:02:35.908812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:29.362 BaseBdev1 00:33:29.362 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.362 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 BaseBdev2_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:33:29.620 14:02:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 true 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 [2024-10-09 14:02:35.950353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:33:29.620 [2024-10-09 14:02:35.950421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.620 [2024-10-09 14:02:35.950443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:33:29.620 [2024-10-09 14:02:35.950454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.620 [2024-10-09 14:02:35.952869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.620 [2024-10-09 14:02:35.952906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:29.620 BaseBdev2 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:33:29.620 BaseBdev3_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 true 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 [2024-10-09 14:02:35.979411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:33:29.620 [2024-10-09 14:02:35.979459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.620 [2024-10-09 14:02:35.979479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:33:29.620 [2024-10-09 14:02:35.979490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.620 [2024-10-09 14:02:35.981985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.620 [2024-10-09 14:02:35.982023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:29.620 BaseBdev3 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 BaseBdev4_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 true 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 [2024-10-09 14:02:36.008482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:33:29.620 [2024-10-09 14:02:36.008528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.620 [2024-10-09 14:02:36.008563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:33:29.620 [2024-10-09 14:02:36.008576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.620 [2024-10-09 14:02:36.011003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.620 [2024-10-09 14:02:36.011039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:29.620 BaseBdev4 
00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 [2024-10-09 14:02:36.016544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:29.620 [2024-10-09 14:02:36.018790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:29.620 [2024-10-09 14:02:36.018894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:29.620 [2024-10-09 14:02:36.018957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:29.620 [2024-10-09 14:02:36.019165] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:33:29.620 [2024-10-09 14:02:36.019177] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:29.620 [2024-10-09 14:02:36.019451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:29.620 [2024-10-09 14:02:36.019608] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:33:29.620 [2024-10-09 14:02:36.019623] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:33:29.620 [2024-10-09 14:02:36.019750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.620 "name": "raid_bdev1", 00:33:29.620 "uuid": "36246ff0-20f2-4791-81f8-272a778f8de1", 00:33:29.620 "strip_size_kb": 0, 00:33:29.620 "state": "online", 00:33:29.620 "raid_level": "raid1", 00:33:29.620 "superblock": true, 00:33:29.620 "num_base_bdevs": 4, 00:33:29.620 "num_base_bdevs_discovered": 4, 00:33:29.620 
"num_base_bdevs_operational": 4, 00:33:29.620 "base_bdevs_list": [ 00:33:29.620 { 00:33:29.620 "name": "BaseBdev1", 00:33:29.620 "uuid": "8a880c0a-544a-53db-9682-b4c35de057eb", 00:33:29.620 "is_configured": true, 00:33:29.620 "data_offset": 2048, 00:33:29.620 "data_size": 63488 00:33:29.620 }, 00:33:29.620 { 00:33:29.620 "name": "BaseBdev2", 00:33:29.620 "uuid": "5717a3ef-1b44-538d-a1ef-9bff814ae134", 00:33:29.620 "is_configured": true, 00:33:29.620 "data_offset": 2048, 00:33:29.620 "data_size": 63488 00:33:29.620 }, 00:33:29.620 { 00:33:29.620 "name": "BaseBdev3", 00:33:29.620 "uuid": "6099c06e-8a32-5905-8bb0-0fe08a33154b", 00:33:29.620 "is_configured": true, 00:33:29.620 "data_offset": 2048, 00:33:29.620 "data_size": 63488 00:33:29.620 }, 00:33:29.620 { 00:33:29.620 "name": "BaseBdev4", 00:33:29.620 "uuid": "4dac3acd-45bb-55e0-8916-403244cf3844", 00:33:29.620 "is_configured": true, 00:33:29.620 "data_offset": 2048, 00:33:29.620 "data_size": 63488 00:33:29.620 } 00:33:29.620 ] 00:33:29.620 }' 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.620 14:02:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.185 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:33:30.185 14:02:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:30.185 [2024-10-09 14:02:36.561001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.120 [2024-10-09 14:02:37.453431] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:33:31.120 [2024-10-09 14:02:37.453489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:31.120 [2024-10-09 14:02:37.453722] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:31.120 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.121 "name": "raid_bdev1", 00:33:31.121 "uuid": "36246ff0-20f2-4791-81f8-272a778f8de1", 00:33:31.121 "strip_size_kb": 0, 00:33:31.121 "state": "online", 00:33:31.121 "raid_level": "raid1", 00:33:31.121 "superblock": true, 00:33:31.121 "num_base_bdevs": 4, 00:33:31.121 "num_base_bdevs_discovered": 3, 00:33:31.121 "num_base_bdevs_operational": 3, 00:33:31.121 "base_bdevs_list": [ 00:33:31.121 { 00:33:31.121 "name": null, 00:33:31.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.121 "is_configured": false, 00:33:31.121 "data_offset": 0, 00:33:31.121 "data_size": 63488 00:33:31.121 }, 00:33:31.121 { 00:33:31.121 "name": "BaseBdev2", 00:33:31.121 "uuid": "5717a3ef-1b44-538d-a1ef-9bff814ae134", 00:33:31.121 "is_configured": true, 00:33:31.121 "data_offset": 2048, 00:33:31.121 "data_size": 63488 00:33:31.121 }, 00:33:31.121 { 00:33:31.121 "name": "BaseBdev3", 00:33:31.121 "uuid": "6099c06e-8a32-5905-8bb0-0fe08a33154b", 00:33:31.121 "is_configured": true, 00:33:31.121 "data_offset": 2048, 00:33:31.121 "data_size": 63488 00:33:31.121 }, 00:33:31.121 { 00:33:31.121 "name": "BaseBdev4", 00:33:31.121 "uuid": "4dac3acd-45bb-55e0-8916-403244cf3844", 00:33:31.121 "is_configured": true, 00:33:31.121 "data_offset": 2048, 00:33:31.121 "data_size": 63488 00:33:31.121 } 00:33:31.121 ] 
00:33:31.121 }' 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.121 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.387 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:31.387 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.387 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.387 [2024-10-09 14:02:37.918474] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:31.387 [2024-10-09 14:02:37.918525] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:31.387 [2024-10-09 14:02:37.921398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:31.387 [2024-10-09 14:02:37.921482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.387 [2024-10-09 14:02:37.921645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:31.388 [2024-10-09 14:02:37.921698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:33:31.388 { 00:33:31.388 "results": [ 00:33:31.388 { 00:33:31.388 "job": "raid_bdev1", 00:33:31.388 "core_mask": "0x1", 00:33:31.388 "workload": "randrw", 00:33:31.388 "percentage": 50, 00:33:31.388 "status": "finished", 00:33:31.388 "queue_depth": 1, 00:33:31.388 "io_size": 131072, 00:33:31.388 "runtime": 1.355316, 00:33:31.388 "iops": 12469.416726431327, 00:33:31.388 "mibps": 1558.677090803916, 00:33:31.388 "io_failed": 0, 00:33:31.388 "io_timeout": 0, 00:33:31.388 "avg_latency_us": 77.55139498450268, 00:33:31.388 "min_latency_us": 22.674285714285713, 00:33:31.388 "max_latency_us": 1466.7580952380952 00:33:31.388 } 00:33:31.388 ], 00:33:31.388 "core_count": 1 
00:33:31.388 } 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86260 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86260 ']' 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86260 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:33:31.388 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86260 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:31.646 killing process with pid 86260 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86260' 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86260 00:33:31.646 [2024-10-09 14:02:37.964308] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:31.646 14:02:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86260 00:33:31.646 [2024-10-09 14:02:38.007808] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xdenPPDUof 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:33:31.906 00:33:31.906 real 0m3.377s 00:33:31.906 user 0m4.236s 00:33:31.906 sys 0m0.629s 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:31.906 14:02:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 ************************************ 00:33:31.906 END TEST raid_write_error_test 00:33:31.906 ************************************ 00:33:31.906 14:02:38 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:33:31.906 14:02:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:33:31.906 14:02:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:33:31.906 14:02:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:31.906 14:02:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:31.906 14:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 ************************************ 00:33:31.906 START TEST raid_rebuild_test 00:33:31.906 ************************************ 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:33:31.906 
14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86393 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86393 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86393 ']' 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:31.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:31.906 14:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.166 [2024-10-09 14:02:38.457603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:32.166 [2024-10-09 14:02:38.457813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86393 ] 00:33:32.166 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:32.166 Zero copy mechanism will not be used. 
00:33:32.166 [2024-10-09 14:02:38.637820] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.166 [2024-10-09 14:02:38.681600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.424 [2024-10-09 14:02:38.726037] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:32.424 [2024-10-09 14:02:38.726073] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.990 BaseBdev1_malloc 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.990 [2024-10-09 14:02:39.338822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:32.990 [2024-10-09 14:02:39.338910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.990 [2024-10-09 14:02:39.338940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:32.990 [2024-10-09 14:02:39.338961] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.990 [2024-10-09 14:02:39.341422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.990 [2024-10-09 14:02:39.341460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:32.990 BaseBdev1 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.990 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 BaseBdev2_malloc 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 [2024-10-09 14:02:39.377322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:32.991 [2024-10-09 14:02:39.377382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.991 [2024-10-09 14:02:39.377409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:32.991 [2024-10-09 14:02:39.377423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.991 [2024-10-09 14:02:39.380201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.991 [2024-10-09 14:02:39.380246] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:32.991 BaseBdev2 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 spare_malloc 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 spare_delay 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 [2024-10-09 14:02:39.410461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:32.991 [2024-10-09 14:02:39.410516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:32.991 [2024-10-09 14:02:39.410541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:32.991 [2024-10-09 14:02:39.410572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:32.991 [2024-10-09 
14:02:39.413006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:32.991 [2024-10-09 14:02:39.413044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:32.991 spare 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 [2024-10-09 14:02:39.418504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:32.991 [2024-10-09 14:02:39.420684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:32.991 [2024-10-09 14:02:39.420788] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:33:32.991 [2024-10-09 14:02:39.420803] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:32.991 [2024-10-09 14:02:39.421073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:32.991 [2024-10-09 14:02:39.421194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:33:32.991 [2024-10-09 14:02:39.421207] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:33:32.991 [2024-10-09 14:02:39.421329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:32.991 14:02:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:32.991 "name": "raid_bdev1", 00:33:32.991 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:32.991 "strip_size_kb": 0, 00:33:32.991 "state": "online", 00:33:32.991 "raid_level": "raid1", 00:33:32.991 "superblock": false, 00:33:32.991 "num_base_bdevs": 2, 00:33:32.991 "num_base_bdevs_discovered": 2, 00:33:32.991 "num_base_bdevs_operational": 2, 00:33:32.991 "base_bdevs_list": [ 00:33:32.991 { 00:33:32.991 "name": "BaseBdev1", 
00:33:32.991 "uuid": "31ccea02-325f-56dc-bbac-48c52aaf07aa", 00:33:32.991 "is_configured": true, 00:33:32.991 "data_offset": 0, 00:33:32.991 "data_size": 65536 00:33:32.991 }, 00:33:32.991 { 00:33:32.991 "name": "BaseBdev2", 00:33:32.991 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:32.991 "is_configured": true, 00:33:32.991 "data_offset": 0, 00:33:32.991 "data_size": 65536 00:33:32.991 } 00:33:32.991 ] 00:33:32.991 }' 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:32.991 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.558 [2024-10-09 14:02:39.878856] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:33:33.558 
14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:33.558 14:02:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:33.816 [2024-10-09 14:02:40.142723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:33.816 /dev/nbd0 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:33.816 1+0 records in 00:33:33.816 1+0 records out 00:33:33.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264215 s, 15.5 MB/s 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:33.816 14:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:33:39.085 65536+0 records in 00:33:39.085 65536+0 records out 00:33:39.085 33554432 bytes (34 MB, 32 MiB) copied, 4.57564 s, 7.3 MB/s 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:39.085 14:02:44 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:39.085 [2024-10-09 14:02:45.020516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.085 [2024-10-09 14:02:45.052630] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:39.085 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.086 14:02:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:39.086 "name": "raid_bdev1", 00:33:39.086 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:39.086 "strip_size_kb": 0, 00:33:39.086 "state": "online", 00:33:39.086 "raid_level": "raid1", 00:33:39.086 "superblock": false, 00:33:39.086 "num_base_bdevs": 2, 00:33:39.086 "num_base_bdevs_discovered": 1, 00:33:39.086 "num_base_bdevs_operational": 1, 00:33:39.086 "base_bdevs_list": [ 00:33:39.086 { 00:33:39.086 "name": null, 00:33:39.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.086 "is_configured": false, 00:33:39.086 "data_offset": 0, 00:33:39.086 "data_size": 65536 00:33:39.086 }, 00:33:39.086 { 00:33:39.086 "name": "BaseBdev2", 00:33:39.086 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:39.086 "is_configured": true, 00:33:39.086 "data_offset": 0, 00:33:39.086 "data_size": 65536 00:33:39.086 } 00:33:39.086 ] 00:33:39.086 }' 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.086 [2024-10-09 14:02:45.456706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:39.086 [2024-10-09 14:02:45.460973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.086 14:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:39.086 [2024-10-09 14:02:45.463200] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.020 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:40.020 "name": "raid_bdev1", 00:33:40.021 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:40.021 "strip_size_kb": 0, 00:33:40.021 "state": "online", 00:33:40.021 "raid_level": "raid1", 00:33:40.021 "superblock": false, 00:33:40.021 "num_base_bdevs": 2, 00:33:40.021 "num_base_bdevs_discovered": 2, 00:33:40.021 "num_base_bdevs_operational": 2, 00:33:40.021 "process": { 00:33:40.021 "type": "rebuild", 00:33:40.021 "target": "spare", 00:33:40.021 "progress": { 00:33:40.021 "blocks": 20480, 00:33:40.021 "percent": 31 00:33:40.021 } 00:33:40.021 }, 00:33:40.021 "base_bdevs_list": [ 00:33:40.021 { 00:33:40.021 "name": "spare", 00:33:40.021 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:40.021 "is_configured": true, 00:33:40.021 "data_offset": 0, 00:33:40.021 
"data_size": 65536 00:33:40.021 }, 00:33:40.021 { 00:33:40.021 "name": "BaseBdev2", 00:33:40.021 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:40.021 "is_configured": true, 00:33:40.021 "data_offset": 0, 00:33:40.021 "data_size": 65536 00:33:40.021 } 00:33:40.021 ] 00:33:40.021 }' 00:33:40.021 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:40.021 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:40.021 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.282 [2024-10-09 14:02:46.605974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:40.282 [2024-10-09 14:02:46.670546] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:40.282 [2024-10-09 14:02:46.670631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.282 [2024-10-09 14:02:46.670653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:40.282 [2024-10-09 14:02:46.670663] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.282 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.282 "name": "raid_bdev1", 00:33:40.282 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:40.282 "strip_size_kb": 0, 00:33:40.282 "state": "online", 00:33:40.282 "raid_level": "raid1", 00:33:40.282 "superblock": false, 00:33:40.282 "num_base_bdevs": 2, 00:33:40.282 "num_base_bdevs_discovered": 1, 00:33:40.282 "num_base_bdevs_operational": 1, 00:33:40.282 "base_bdevs_list": [ 00:33:40.282 { 00:33:40.282 "name": null, 00:33:40.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.282 
"is_configured": false, 00:33:40.282 "data_offset": 0, 00:33:40.282 "data_size": 65536 00:33:40.282 }, 00:33:40.282 { 00:33:40.282 "name": "BaseBdev2", 00:33:40.282 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:40.282 "is_configured": true, 00:33:40.282 "data_offset": 0, 00:33:40.282 "data_size": 65536 00:33:40.282 } 00:33:40.282 ] 00:33:40.282 }' 00:33:40.283 14:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.283 14:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.895 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:40.896 "name": "raid_bdev1", 00:33:40.896 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:40.896 "strip_size_kb": 0, 00:33:40.896 "state": "online", 00:33:40.896 "raid_level": "raid1", 00:33:40.896 "superblock": false, 00:33:40.896 "num_base_bdevs": 2, 00:33:40.896 
"num_base_bdevs_discovered": 1, 00:33:40.896 "num_base_bdevs_operational": 1, 00:33:40.896 "base_bdevs_list": [ 00:33:40.896 { 00:33:40.896 "name": null, 00:33:40.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.896 "is_configured": false, 00:33:40.896 "data_offset": 0, 00:33:40.896 "data_size": 65536 00:33:40.896 }, 00:33:40.896 { 00:33:40.896 "name": "BaseBdev2", 00:33:40.896 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:40.896 "is_configured": true, 00:33:40.896 "data_offset": 0, 00:33:40.896 "data_size": 65536 00:33:40.896 } 00:33:40.896 ] 00:33:40.896 }' 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.896 [2024-10-09 14:02:47.283544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:40.896 [2024-10-09 14:02:47.287733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.896 14:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:40.896 [2024-10-09 14:02:47.289951] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:41.832 "name": "raid_bdev1", 00:33:41.832 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:41.832 "strip_size_kb": 0, 00:33:41.832 "state": "online", 00:33:41.832 "raid_level": "raid1", 00:33:41.832 "superblock": false, 00:33:41.832 "num_base_bdevs": 2, 00:33:41.832 "num_base_bdevs_discovered": 2, 00:33:41.832 "num_base_bdevs_operational": 2, 00:33:41.832 "process": { 00:33:41.832 "type": "rebuild", 00:33:41.832 "target": "spare", 00:33:41.832 "progress": { 00:33:41.832 "blocks": 20480, 00:33:41.832 "percent": 31 00:33:41.832 } 00:33:41.832 }, 00:33:41.832 "base_bdevs_list": [ 00:33:41.832 { 00:33:41.832 "name": "spare", 00:33:41.832 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:41.832 "is_configured": true, 00:33:41.832 "data_offset": 0, 00:33:41.832 "data_size": 65536 00:33:41.832 }, 00:33:41.832 { 00:33:41.832 "name": "BaseBdev2", 00:33:41.832 "uuid": 
"b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:41.832 "is_configured": true, 00:33:41.832 "data_offset": 0, 00:33:41.832 "data_size": 65536 00:33:41.832 } 00:33:41.832 ] 00:33:41.832 }' 00:33:41.832 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=302 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.091 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:42.091 "name": "raid_bdev1", 00:33:42.091 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:42.091 "strip_size_kb": 0, 00:33:42.091 "state": "online", 00:33:42.091 "raid_level": "raid1", 00:33:42.091 "superblock": false, 00:33:42.091 "num_base_bdevs": 2, 00:33:42.091 "num_base_bdevs_discovered": 2, 00:33:42.092 "num_base_bdevs_operational": 2, 00:33:42.092 "process": { 00:33:42.092 "type": "rebuild", 00:33:42.092 "target": "spare", 00:33:42.092 "progress": { 00:33:42.092 "blocks": 22528, 00:33:42.092 "percent": 34 00:33:42.092 } 00:33:42.092 }, 00:33:42.092 "base_bdevs_list": [ 00:33:42.092 { 00:33:42.092 "name": "spare", 00:33:42.092 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:42.092 "is_configured": true, 00:33:42.092 "data_offset": 0, 00:33:42.092 "data_size": 65536 00:33:42.092 }, 00:33:42.092 { 00:33:42.092 "name": "BaseBdev2", 00:33:42.092 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:42.092 "is_configured": true, 00:33:42.092 "data_offset": 0, 00:33:42.092 "data_size": 65536 00:33:42.092 } 00:33:42.092 ] 00:33:42.092 }' 00:33:42.092 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:42.092 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:42.092 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:42.092 14:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:42.092 14:02:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.027 14:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:43.285 "name": "raid_bdev1", 00:33:43.285 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:43.285 "strip_size_kb": 0, 00:33:43.285 "state": "online", 00:33:43.285 "raid_level": "raid1", 00:33:43.285 "superblock": false, 00:33:43.285 "num_base_bdevs": 2, 00:33:43.285 "num_base_bdevs_discovered": 2, 00:33:43.285 "num_base_bdevs_operational": 2, 00:33:43.285 "process": { 00:33:43.285 "type": "rebuild", 00:33:43.285 "target": "spare", 00:33:43.285 "progress": { 00:33:43.285 "blocks": 45056, 00:33:43.285 "percent": 68 00:33:43.285 } 00:33:43.285 }, 00:33:43.285 "base_bdevs_list": [ 00:33:43.285 { 00:33:43.285 "name": "spare", 00:33:43.285 "uuid": 
"1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:43.285 "is_configured": true, 00:33:43.285 "data_offset": 0, 00:33:43.285 "data_size": 65536 00:33:43.285 }, 00:33:43.285 { 00:33:43.285 "name": "BaseBdev2", 00:33:43.285 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:43.285 "is_configured": true, 00:33:43.285 "data_offset": 0, 00:33:43.285 "data_size": 65536 00:33:43.285 } 00:33:43.285 ] 00:33:43.285 }' 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:43.285 14:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:44.223 [2024-10-09 14:02:50.507346] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:44.223 [2024-10-09 14:02:50.507418] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:44.223 [2024-10-09 14:02:50.507458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:44.223 14:02:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:44.223 "name": "raid_bdev1", 00:33:44.223 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:44.223 "strip_size_kb": 0, 00:33:44.223 "state": "online", 00:33:44.223 "raid_level": "raid1", 00:33:44.223 "superblock": false, 00:33:44.223 "num_base_bdevs": 2, 00:33:44.223 "num_base_bdevs_discovered": 2, 00:33:44.223 "num_base_bdevs_operational": 2, 00:33:44.223 "base_bdevs_list": [ 00:33:44.223 { 00:33:44.223 "name": "spare", 00:33:44.223 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:44.223 "is_configured": true, 00:33:44.223 "data_offset": 0, 00:33:44.223 "data_size": 65536 00:33:44.223 }, 00:33:44.223 { 00:33:44.223 "name": "BaseBdev2", 00:33:44.223 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:44.223 "is_configured": true, 00:33:44.223 "data_offset": 0, 00:33:44.223 "data_size": 65536 00:33:44.223 } 00:33:44.223 ] 00:33:44.223 }' 00:33:44.223 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:44.482 "name": "raid_bdev1", 00:33:44.482 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:44.482 "strip_size_kb": 0, 00:33:44.482 "state": "online", 00:33:44.482 "raid_level": "raid1", 00:33:44.482 "superblock": false, 00:33:44.482 "num_base_bdevs": 2, 00:33:44.482 "num_base_bdevs_discovered": 2, 00:33:44.482 "num_base_bdevs_operational": 2, 00:33:44.482 "base_bdevs_list": [ 00:33:44.482 { 00:33:44.482 "name": "spare", 00:33:44.482 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:44.482 "is_configured": true, 00:33:44.482 "data_offset": 0, 00:33:44.482 "data_size": 65536 00:33:44.482 }, 00:33:44.482 { 00:33:44.482 "name": "BaseBdev2", 00:33:44.482 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:44.482 "is_configured": true, 00:33:44.482 "data_offset": 0, 00:33:44.482 "data_size": 65536 
00:33:44.482 } 00:33:44.482 ] 00:33:44.482 }' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.482 14:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.482 
14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.741 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.741 "name": "raid_bdev1", 00:33:44.741 "uuid": "5d9ea2f0-5903-4cc9-b723-b5e47510f134", 00:33:44.741 "strip_size_kb": 0, 00:33:44.741 "state": "online", 00:33:44.741 "raid_level": "raid1", 00:33:44.741 "superblock": false, 00:33:44.741 "num_base_bdevs": 2, 00:33:44.741 "num_base_bdevs_discovered": 2, 00:33:44.741 "num_base_bdevs_operational": 2, 00:33:44.741 "base_bdevs_list": [ 00:33:44.741 { 00:33:44.741 "name": "spare", 00:33:44.741 "uuid": "1409ddde-9e74-521c-82dc-6f57d2b93e98", 00:33:44.741 "is_configured": true, 00:33:44.741 "data_offset": 0, 00:33:44.741 "data_size": 65536 00:33:44.741 }, 00:33:44.741 { 00:33:44.741 "name": "BaseBdev2", 00:33:44.741 "uuid": "b9d4fb9b-cc33-5a03-9e4d-ac0fa94b0825", 00:33:44.741 "is_configured": true, 00:33:44.741 "data_offset": 0, 00:33:44.741 "data_size": 65536 00:33:44.741 } 00:33:44.741 ] 00:33:44.741 }' 00:33:44.741 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.741 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.000 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:45.000 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.000 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.001 [2024-10-09 14:02:51.435757] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:45.001 [2024-10-09 14:02:51.435788] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:45.001 [2024-10-09 14:02:51.435876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:45.001 [2024-10-09 14:02:51.435939] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:45.001 [2024-10-09 14:02:51.435957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:45.001 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:45.260 /dev/nbd0 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:45.260 1+0 records in 00:33:45.260 1+0 records out 00:33:45.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364384 s, 11.2 MB/s 00:33:45.260 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:45.520 14:02:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:45.520 /dev/nbd1 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:45.520 1+0 records in 00:33:45.520 1+0 records out 00:33:45.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361317 s, 11.3 MB/s 00:33:45.520 14:02:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:45.520 14:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:45.779 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:46.038 
14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:46.038 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86393 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 86393 ']' 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86393 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86393 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:46.298 killing process with pid 86393 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86393' 00:33:46.298 Received shutdown signal, test time was about 60.000000 seconds 00:33:46.298 00:33:46.298 Latency(us) 00:33:46.298 [2024-10-09T14:02:52.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.298 [2024-10-09T14:02:52.849Z] =================================================================================================================== 00:33:46.298 [2024-10-09T14:02:52.849Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86393 00:33:46.298 [2024-10-09 14:02:52.755838] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:46.298 14:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86393 00:33:46.298 [2024-10-09 14:02:52.786770] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:33:46.557 00:33:46.557 real 0m14.687s 00:33:46.557 user 0m16.382s 00:33:46.557 sys 0m3.588s 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:46.557 ************************************ 00:33:46.557 END TEST raid_rebuild_test 
00:33:46.557 ************************************ 00:33:46.557 14:02:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:33:46.557 14:02:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:46.557 14:02:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.557 14:02:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:46.557 ************************************ 00:33:46.557 START TEST raid_rebuild_test_sb 00:33:46.557 ************************************ 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86809 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86809 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86809 ']' 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:46.557 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:46.557 14:02:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:46.816 [2024-10-09 14:02:53.211042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:33:46.816 [2024-10-09 14:02:53.211223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86809 ] 00:33:46.816 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:46.816 Zero copy mechanism will not be used. 00:33:47.076 [2024-10-09 14:02:53.387411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.076 [2024-10-09 14:02:53.430483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.076 [2024-10-09 14:02:53.474062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:47.076 [2024-10-09 14:02:53.474100] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.644 BaseBdev1_malloc 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.644 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.644 [2024-10-09 14:02:54.074482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:47.644 [2024-10-09 14:02:54.074561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.644 [2024-10-09 14:02:54.074596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:47.644 [2024-10-09 14:02:54.074623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.644 [2024-10-09 14:02:54.077140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.644 [2024-10-09 14:02:54.077178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:47.644 BaseBdev1 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 BaseBdev2_malloc 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 [2024-10-09 14:02:54.108633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:47.645 [2024-10-09 14:02:54.108704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.645 [2024-10-09 14:02:54.108735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:47.645 [2024-10-09 14:02:54.108749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.645 [2024-10-09 14:02:54.111872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.645 [2024-10-09 14:02:54.112046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:47.645 BaseBdev2 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 spare_malloc 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 spare_delay 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 [2024-10-09 14:02:54.146072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:47.645 [2024-10-09 14:02:54.146137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.645 [2024-10-09 14:02:54.146163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:33:47.645 [2024-10-09 14:02:54.146174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.645 [2024-10-09 14:02:54.148700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.645 [2024-10-09 14:02:54.148736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:47.645 spare 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 [2024-10-09 14:02:54.158121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:47.645 [2024-10-09 14:02:54.160421] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:47.645 [2024-10-09 14:02:54.160612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:33:47.645 [2024-10-09 14:02:54.160627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:47.645 [2024-10-09 14:02:54.160904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:47.645 [2024-10-09 14:02:54.161033] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:33:47.645 [2024-10-09 14:02:54.161048] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:33:47.645 [2024-10-09 14:02:54.161167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.645 14:02:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.645 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.904 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.904 "name": "raid_bdev1", 00:33:47.904 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:47.904 "strip_size_kb": 0, 00:33:47.904 "state": "online", 00:33:47.904 "raid_level": "raid1", 00:33:47.904 "superblock": true, 00:33:47.904 "num_base_bdevs": 2, 00:33:47.904 "num_base_bdevs_discovered": 2, 00:33:47.904 "num_base_bdevs_operational": 2, 00:33:47.904 "base_bdevs_list": [ 00:33:47.904 { 00:33:47.904 "name": "BaseBdev1", 00:33:47.904 "uuid": "d6c5d585-db65-56ec-8d7d-9310fe985021", 00:33:47.904 "is_configured": true, 00:33:47.904 "data_offset": 2048, 00:33:47.904 "data_size": 63488 00:33:47.904 }, 00:33:47.904 { 00:33:47.904 "name": "BaseBdev2", 00:33:47.904 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:47.904 "is_configured": true, 00:33:47.904 "data_offset": 2048, 00:33:47.904 "data_size": 63488 00:33:47.904 } 00:33:47.904 ] 00:33:47.904 }' 00:33:47.904 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.904 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:48.163 [2024-10-09 14:02:54.602482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:48.163 
14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:48.163 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:48.422 [2024-10-09 14:02:54.922309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:48.422 /dev/nbd0 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:48.422 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:48.422 1+0 records in 00:33:48.422 1+0 records out 00:33:48.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335932 s, 12.2 MB/s 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:33:48.681 14:02:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:33:53.993 63488+0 records in 00:33:53.993 63488+0 records out 00:33:53.993 32505856 bytes (33 MB, 31 MiB) copied, 5.07578 s, 6.4 MB/s 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:53.993 [2024-10-09 14:03:00.327045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.993 [2024-10-09 14:03:00.339187] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.993 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.994 "name": "raid_bdev1", 00:33:53.994 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:53.994 "strip_size_kb": 0, 00:33:53.994 "state": "online", 00:33:53.994 "raid_level": "raid1", 00:33:53.994 "superblock": true, 00:33:53.994 "num_base_bdevs": 2, 00:33:53.994 "num_base_bdevs_discovered": 1, 00:33:53.994 "num_base_bdevs_operational": 1, 00:33:53.994 "base_bdevs_list": [ 00:33:53.994 { 00:33:53.994 "name": null, 00:33:53.994 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:53.994 "is_configured": false, 00:33:53.994 "data_offset": 0, 00:33:53.994 "data_size": 63488 00:33:53.994 }, 00:33:53.994 { 00:33:53.994 "name": "BaseBdev2", 00:33:53.994 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:53.994 "is_configured": true, 00:33:53.994 "data_offset": 2048, 00:33:53.994 "data_size": 63488 00:33:53.994 } 00:33:53.994 ] 00:33:53.994 }' 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.994 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.253 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:54.253 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.253 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:54.253 [2024-10-09 14:03:00.731380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:54.253 [2024-10-09 14:03:00.739683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:33:54.253 14:03:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.253 14:03:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:54.253 [2024-10-09 14:03:00.742801] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:55.711 
14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:55.711 "name": "raid_bdev1", 00:33:55.711 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:55.711 "strip_size_kb": 0, 00:33:55.711 "state": "online", 00:33:55.711 "raid_level": "raid1", 00:33:55.711 "superblock": true, 00:33:55.711 "num_base_bdevs": 2, 00:33:55.711 "num_base_bdevs_discovered": 2, 00:33:55.711 "num_base_bdevs_operational": 2, 00:33:55.711 "process": { 00:33:55.711 "type": "rebuild", 00:33:55.711 "target": "spare", 00:33:55.711 "progress": { 00:33:55.711 "blocks": 20480, 00:33:55.711 "percent": 32 00:33:55.711 } 00:33:55.711 }, 00:33:55.711 "base_bdevs_list": [ 00:33:55.711 { 00:33:55.711 "name": "spare", 00:33:55.711 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:55.711 "is_configured": true, 00:33:55.711 "data_offset": 2048, 00:33:55.711 "data_size": 63488 00:33:55.711 }, 00:33:55.711 { 00:33:55.711 "name": "BaseBdev2", 00:33:55.711 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:55.711 "is_configured": true, 00:33:55.711 "data_offset": 2048, 00:33:55.711 "data_size": 63488 00:33:55.711 } 00:33:55.711 ] 00:33:55.711 }' 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.711 [2024-10-09 14:03:01.885291] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.711 [2024-10-09 14:03:01.955199] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:55.711 [2024-10-09 14:03:01.955332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.711 [2024-10-09 14:03:01.955373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:55.711 [2024-10-09 14:03:01.955384] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.711 14:03:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.711 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.711 "name": "raid_bdev1", 00:33:55.711 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:55.711 "strip_size_kb": 0, 00:33:55.711 "state": "online", 00:33:55.711 "raid_level": "raid1", 00:33:55.711 "superblock": true, 00:33:55.711 "num_base_bdevs": 2, 00:33:55.711 "num_base_bdevs_discovered": 1, 00:33:55.711 "num_base_bdevs_operational": 1, 00:33:55.711 "base_bdevs_list": [ 00:33:55.711 { 00:33:55.711 "name": null, 00:33:55.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.711 "is_configured": false, 00:33:55.711 "data_offset": 0, 00:33:55.711 "data_size": 63488 00:33:55.711 }, 00:33:55.711 { 00:33:55.711 "name": "BaseBdev2", 00:33:55.711 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:55.711 "is_configured": true, 00:33:55.711 "data_offset": 2048, 00:33:55.711 "data_size": 63488 00:33:55.711 } 00:33:55.711 ] 00:33:55.711 }' 00:33:55.711 14:03:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.711 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:55.970 "name": "raid_bdev1", 00:33:55.970 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:55.970 "strip_size_kb": 0, 00:33:55.970 "state": "online", 00:33:55.970 "raid_level": "raid1", 00:33:55.970 "superblock": true, 00:33:55.970 "num_base_bdevs": 2, 00:33:55.970 "num_base_bdevs_discovered": 1, 00:33:55.970 "num_base_bdevs_operational": 1, 00:33:55.970 "base_bdevs_list": [ 00:33:55.970 { 00:33:55.970 "name": null, 00:33:55.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.970 "is_configured": false, 00:33:55.970 "data_offset": 0, 00:33:55.970 "data_size": 63488 00:33:55.970 }, 00:33:55.970 
{ 00:33:55.970 "name": "BaseBdev2", 00:33:55.970 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:55.970 "is_configured": true, 00:33:55.970 "data_offset": 2048, 00:33:55.970 "data_size": 63488 00:33:55.970 } 00:33:55.970 ] 00:33:55.970 }' 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:55.970 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:56.229 [2024-10-09 14:03:02.548138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:56.229 [2024-10-09 14:03:02.556030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.229 14:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:56.229 [2024-10-09 14:03:02.558850] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:57.166 14:03:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:57.166 "name": "raid_bdev1", 00:33:57.166 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:57.166 "strip_size_kb": 0, 00:33:57.166 "state": "online", 00:33:57.166 "raid_level": "raid1", 00:33:57.166 "superblock": true, 00:33:57.166 "num_base_bdevs": 2, 00:33:57.166 "num_base_bdevs_discovered": 2, 00:33:57.166 "num_base_bdevs_operational": 2, 00:33:57.166 "process": { 00:33:57.166 "type": "rebuild", 00:33:57.166 "target": "spare", 00:33:57.166 "progress": { 00:33:57.166 "blocks": 20480, 00:33:57.166 "percent": 32 00:33:57.166 } 00:33:57.166 }, 00:33:57.166 "base_bdevs_list": [ 00:33:57.166 { 00:33:57.166 "name": "spare", 00:33:57.166 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:57.166 "is_configured": true, 00:33:57.166 "data_offset": 2048, 00:33:57.166 "data_size": 63488 00:33:57.166 }, 00:33:57.166 { 00:33:57.166 "name": "BaseBdev2", 00:33:57.166 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:57.166 "is_configured": true, 00:33:57.166 "data_offset": 2048, 00:33:57.166 "data_size": 63488 00:33:57.166 } 00:33:57.166 ] 00:33:57.166 }' 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:57.166 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:57.167 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=317 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:57.167 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:57.426 "name": "raid_bdev1", 00:33:57.426 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:57.426 "strip_size_kb": 0, 00:33:57.426 "state": "online", 00:33:57.426 "raid_level": "raid1", 00:33:57.426 "superblock": true, 00:33:57.426 "num_base_bdevs": 2, 00:33:57.426 "num_base_bdevs_discovered": 2, 00:33:57.426 "num_base_bdevs_operational": 2, 00:33:57.426 "process": { 00:33:57.426 "type": "rebuild", 00:33:57.426 "target": "spare", 00:33:57.426 "progress": { 00:33:57.426 "blocks": 22528, 00:33:57.426 "percent": 35 00:33:57.426 } 00:33:57.426 }, 00:33:57.426 "base_bdevs_list": [ 00:33:57.426 { 00:33:57.426 "name": "spare", 00:33:57.426 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:57.426 "is_configured": true, 00:33:57.426 "data_offset": 2048, 00:33:57.426 "data_size": 63488 00:33:57.426 }, 00:33:57.426 { 00:33:57.426 "name": "BaseBdev2", 00:33:57.426 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:57.426 "is_configured": true, 00:33:57.426 "data_offset": 2048, 00:33:57.426 "data_size": 63488 00:33:57.426 } 00:33:57.426 ] 00:33:57.426 }' 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:57.426 14:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.426 14:03:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:58.362 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:58.363 "name": "raid_bdev1", 00:33:58.363 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:58.363 "strip_size_kb": 0, 00:33:58.363 "state": "online", 00:33:58.363 "raid_level": "raid1", 00:33:58.363 "superblock": true, 00:33:58.363 "num_base_bdevs": 2, 00:33:58.363 "num_base_bdevs_discovered": 2, 00:33:58.363 "num_base_bdevs_operational": 2, 00:33:58.363 "process": { 00:33:58.363 "type": "rebuild", 00:33:58.363 "target": "spare", 00:33:58.363 "progress": { 00:33:58.363 "blocks": 45056, 00:33:58.363 "percent": 70 00:33:58.363 } 00:33:58.363 }, 00:33:58.363 "base_bdevs_list": [ 00:33:58.363 { 
00:33:58.363 "name": "spare", 00:33:58.363 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:58.363 "is_configured": true, 00:33:58.363 "data_offset": 2048, 00:33:58.363 "data_size": 63488 00:33:58.363 }, 00:33:58.363 { 00:33:58.363 "name": "BaseBdev2", 00:33:58.363 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:58.363 "is_configured": true, 00:33:58.363 "data_offset": 2048, 00:33:58.363 "data_size": 63488 00:33:58.363 } 00:33:58.363 ] 00:33:58.363 }' 00:33:58.363 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:58.622 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:58.622 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:58.622 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:58.622 14:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:59.190 [2024-10-09 14:03:05.688263] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:59.190 [2024-10-09 14:03:05.688375] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:59.190 [2024-10-09 14:03:05.688499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:59.449 14:03:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.449 14:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.708 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.708 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.708 "name": "raid_bdev1", 00:33:59.708 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:59.708 "strip_size_kb": 0, 00:33:59.708 "state": "online", 00:33:59.709 "raid_level": "raid1", 00:33:59.709 "superblock": true, 00:33:59.709 "num_base_bdevs": 2, 00:33:59.709 "num_base_bdevs_discovered": 2, 00:33:59.709 "num_base_bdevs_operational": 2, 00:33:59.709 "base_bdevs_list": [ 00:33:59.709 { 00:33:59.709 "name": "spare", 00:33:59.709 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:59.709 "is_configured": true, 00:33:59.709 "data_offset": 2048, 00:33:59.709 "data_size": 63488 00:33:59.709 }, 00:33:59.709 { 00:33:59.709 "name": "BaseBdev2", 00:33:59.709 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:59.709 "is_configured": true, 00:33:59.709 "data_offset": 2048, 00:33:59.709 "data_size": 63488 00:33:59.709 } 00:33:59.709 ] 00:33:59.709 }' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:59.709 "name": "raid_bdev1", 00:33:59.709 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:59.709 "strip_size_kb": 0, 00:33:59.709 "state": "online", 00:33:59.709 "raid_level": "raid1", 00:33:59.709 "superblock": true, 00:33:59.709 "num_base_bdevs": 2, 00:33:59.709 "num_base_bdevs_discovered": 2, 00:33:59.709 "num_base_bdevs_operational": 2, 00:33:59.709 "base_bdevs_list": [ 00:33:59.709 { 00:33:59.709 "name": "spare", 00:33:59.709 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:59.709 "is_configured": true, 00:33:59.709 "data_offset": 2048, 00:33:59.709 "data_size": 63488 00:33:59.709 }, 00:33:59.709 { 00:33:59.709 "name": 
"BaseBdev2", 00:33:59.709 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:59.709 "is_configured": true, 00:33:59.709 "data_offset": 2048, 00:33:59.709 "data_size": 63488 00:33:59.709 } 00:33:59.709 ] 00:33:59.709 }' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:59.709 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:59.968 "name": "raid_bdev1", 00:33:59.968 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:33:59.968 "strip_size_kb": 0, 00:33:59.968 "state": "online", 00:33:59.968 "raid_level": "raid1", 00:33:59.968 "superblock": true, 00:33:59.968 "num_base_bdevs": 2, 00:33:59.968 "num_base_bdevs_discovered": 2, 00:33:59.968 "num_base_bdevs_operational": 2, 00:33:59.968 "base_bdevs_list": [ 00:33:59.968 { 00:33:59.968 "name": "spare", 00:33:59.968 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:33:59.968 "is_configured": true, 00:33:59.968 "data_offset": 2048, 00:33:59.968 "data_size": 63488 00:33:59.968 }, 00:33:59.968 { 00:33:59.968 "name": "BaseBdev2", 00:33:59.968 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:33:59.968 "is_configured": true, 00:33:59.968 "data_offset": 2048, 00:33:59.968 "data_size": 63488 00:33:59.968 } 00:33:59.968 ] 00:33:59.968 }' 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:59.968 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 [2024-10-09 14:03:06.720542] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:00.227 [2024-10-09 14:03:06.720605] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:00.227 [2024-10-09 14:03:06.720744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:00.227 [2024-10-09 14:03:06.720833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:00.227 [2024-10-09 14:03:06.720852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:00.227 14:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:00.796 /dev/nbd0 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:00.796 1+0 records in 00:34:00.796 1+0 records out 00:34:00.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000435466 s, 9.4 MB/s 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:00.796 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:01.055 /dev/nbd1 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:01.055 14:03:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:01.055 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:01.055 1+0 records in 00:34:01.056 1+0 records out 00:34:01.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425958 s, 9.6 MB/s 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:01.056 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:01.056 
14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:01.317 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.577 [2024-10-09 14:03:07.938588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:01.577 [2024-10-09 14:03:07.938676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.577 [2024-10-09 14:03:07.938716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:01.577 [2024-10-09 14:03:07.938738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.577 [2024-10-09 14:03:07.942013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.577 [2024-10-09 14:03:07.942060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:01.577 [2024-10-09 14:03:07.942157] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:01.577 [2024-10-09 14:03:07.942213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:01.577 [2024-10-09 14:03:07.942349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:34:01.577 spare 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.577 14:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.577 [2024-10-09 14:03:08.042468] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:34:01.577 [2024-10-09 14:03:08.042513] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:01.577 [2024-10-09 14:03:08.042973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:34:01.577 [2024-10-09 14:03:08.043218] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:34:01.577 [2024-10-09 14:03:08.043243] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:34:01.577 [2024-10-09 14:03:08.043455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:01.577 "name": "raid_bdev1", 00:34:01.577 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:01.577 "strip_size_kb": 0, 00:34:01.577 "state": "online", 00:34:01.577 "raid_level": "raid1", 00:34:01.577 "superblock": true, 00:34:01.577 "num_base_bdevs": 2, 00:34:01.577 "num_base_bdevs_discovered": 2, 00:34:01.577 "num_base_bdevs_operational": 2, 00:34:01.577 "base_bdevs_list": [ 00:34:01.577 { 00:34:01.577 "name": "spare", 00:34:01.577 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:34:01.577 "is_configured": true, 00:34:01.577 "data_offset": 2048, 00:34:01.577 "data_size": 63488 00:34:01.577 }, 00:34:01.577 { 00:34:01.577 "name": "BaseBdev2", 00:34:01.577 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:01.577 "is_configured": true, 00:34:01.577 "data_offset": 2048, 00:34:01.577 "data_size": 63488 00:34:01.577 } 00:34:01.577 ] 00:34:01.577 }' 00:34:01.577 14:03:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:01.577 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:02.146 "name": "raid_bdev1", 00:34:02.146 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:02.146 "strip_size_kb": 0, 00:34:02.146 "state": "online", 00:34:02.146 "raid_level": "raid1", 00:34:02.146 "superblock": true, 00:34:02.146 "num_base_bdevs": 2, 00:34:02.146 "num_base_bdevs_discovered": 2, 00:34:02.146 "num_base_bdevs_operational": 2, 00:34:02.146 "base_bdevs_list": [ 00:34:02.146 { 00:34:02.146 "name": "spare", 00:34:02.146 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:34:02.146 "is_configured": true, 00:34:02.146 "data_offset": 2048, 00:34:02.146 "data_size": 63488 00:34:02.146 }, 
00:34:02.146 { 00:34:02.146 "name": "BaseBdev2", 00:34:02.146 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:02.146 "is_configured": true, 00:34:02.146 "data_offset": 2048, 00:34:02.146 "data_size": 63488 00:34:02.146 } 00:34:02.146 ] 00:34:02.146 }' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.146 [2024-10-09 14:03:08.679574] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.146 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.406 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.406 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.406 "name": "raid_bdev1", 00:34:02.406 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:02.406 "strip_size_kb": 0, 00:34:02.406 "state": "online", 00:34:02.406 "raid_level": "raid1", 00:34:02.406 "superblock": true, 00:34:02.406 "num_base_bdevs": 2, 00:34:02.406 "num_base_bdevs_discovered": 1, 00:34:02.406 "num_base_bdevs_operational": 
1, 00:34:02.406 "base_bdevs_list": [ 00:34:02.406 { 00:34:02.406 "name": null, 00:34:02.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.406 "is_configured": false, 00:34:02.406 "data_offset": 0, 00:34:02.406 "data_size": 63488 00:34:02.406 }, 00:34:02.406 { 00:34:02.406 "name": "BaseBdev2", 00:34:02.406 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:02.406 "is_configured": true, 00:34:02.406 "data_offset": 2048, 00:34:02.406 "data_size": 63488 00:34:02.406 } 00:34:02.406 ] 00:34:02.406 }' 00:34:02.406 14:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.406 14:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.665 14:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:02.665 14:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.665 14:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.665 [2024-10-09 14:03:09.151740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:02.665 [2024-10-09 14:03:09.151998] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:02.665 [2024-10-09 14:03:09.152016] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:02.665 [2024-10-09 14:03:09.152070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:02.665 [2024-10-09 14:03:09.159536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:34:02.665 14:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.665 14:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:02.665 [2024-10-09 14:03:09.162232] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:04.043 "name": "raid_bdev1", 00:34:04.043 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:04.043 "strip_size_kb": 0, 00:34:04.043 "state": "online", 00:34:04.043 "raid_level": "raid1", 
00:34:04.043 "superblock": true, 00:34:04.043 "num_base_bdevs": 2, 00:34:04.043 "num_base_bdevs_discovered": 2, 00:34:04.043 "num_base_bdevs_operational": 2, 00:34:04.043 "process": { 00:34:04.043 "type": "rebuild", 00:34:04.043 "target": "spare", 00:34:04.043 "progress": { 00:34:04.043 "blocks": 20480, 00:34:04.043 "percent": 32 00:34:04.043 } 00:34:04.043 }, 00:34:04.043 "base_bdevs_list": [ 00:34:04.043 { 00:34:04.043 "name": "spare", 00:34:04.043 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:34:04.043 "is_configured": true, 00:34:04.043 "data_offset": 2048, 00:34:04.043 "data_size": 63488 00:34:04.043 }, 00:34:04.043 { 00:34:04.043 "name": "BaseBdev2", 00:34:04.043 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:04.043 "is_configured": true, 00:34:04.043 "data_offset": 2048, 00:34:04.043 "data_size": 63488 00:34:04.043 } 00:34:04.043 ] 00:34:04.043 }' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.043 [2024-10-09 14:03:10.304521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:04.043 [2024-10-09 14:03:10.372307] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:04.043 [2024-10-09 14:03:10.372412] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:34:04.043 [2024-10-09 14:03:10.372437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:04.043 [2024-10-09 14:03:10.372448] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:04.043 "name": "raid_bdev1", 00:34:04.043 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:04.043 "strip_size_kb": 0, 00:34:04.043 "state": "online", 00:34:04.043 "raid_level": "raid1", 00:34:04.043 "superblock": true, 00:34:04.043 "num_base_bdevs": 2, 00:34:04.043 "num_base_bdevs_discovered": 1, 00:34:04.043 "num_base_bdevs_operational": 1, 00:34:04.043 "base_bdevs_list": [ 00:34:04.043 { 00:34:04.043 "name": null, 00:34:04.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.043 "is_configured": false, 00:34:04.043 "data_offset": 0, 00:34:04.043 "data_size": 63488 00:34:04.043 }, 00:34:04.043 { 00:34:04.043 "name": "BaseBdev2", 00:34:04.043 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:04.043 "is_configured": true, 00:34:04.043 "data_offset": 2048, 00:34:04.043 "data_size": 63488 00:34:04.043 } 00:34:04.043 ] 00:34:04.043 }' 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:04.043 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.303 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:04.303 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.303 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.303 [2024-10-09 14:03:10.844408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:04.303 [2024-10-09 14:03:10.844530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.303 [2024-10-09 14:03:10.844596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:04.303 [2024-10-09 14:03:10.844618] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.303 [2024-10-09 14:03:10.845373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.303 [2024-10-09 14:03:10.845412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:04.303 [2024-10-09 14:03:10.845585] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:04.303 [2024-10-09 14:03:10.845609] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:04.303 [2024-10-09 14:03:10.845659] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:04.303 [2024-10-09 14:03:10.845700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:04.561 [2024-10-09 14:03:10.854672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:34:04.561 spare 00:34:04.561 14:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.561 14:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:04.561 [2024-10-09 14:03:10.857916] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:05.500 "name": "raid_bdev1", 00:34:05.500 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:05.500 "strip_size_kb": 0, 00:34:05.500 "state": "online", 00:34:05.500 "raid_level": "raid1", 00:34:05.500 "superblock": true, 00:34:05.500 "num_base_bdevs": 2, 00:34:05.500 "num_base_bdevs_discovered": 2, 00:34:05.500 "num_base_bdevs_operational": 2, 00:34:05.500 "process": { 00:34:05.500 "type": "rebuild", 00:34:05.500 "target": "spare", 00:34:05.500 "progress": { 00:34:05.500 "blocks": 20480, 00:34:05.500 "percent": 32 00:34:05.500 } 00:34:05.500 }, 00:34:05.500 "base_bdevs_list": [ 00:34:05.500 { 00:34:05.500 "name": "spare", 00:34:05.500 "uuid": "faa1988a-65a9-5c77-a182-6fbfcec3bfa1", 00:34:05.500 "is_configured": true, 00:34:05.500 "data_offset": 2048, 00:34:05.500 "data_size": 63488 00:34:05.500 }, 00:34:05.500 { 00:34:05.500 "name": "BaseBdev2", 00:34:05.500 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:05.500 "is_configured": true, 00:34:05.500 "data_offset": 2048, 00:34:05.500 "data_size": 63488 00:34:05.500 } 00:34:05.500 ] 00:34:05.500 }' 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:05.500 14:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:05.500 
14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:05.500 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:05.500 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.500 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.500 [2024-10-09 14:03:12.007629] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:05.760 [2024-10-09 14:03:12.070041] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:05.760 [2024-10-09 14:03:12.070287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:05.760 [2024-10-09 14:03:12.070310] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:05.760 [2024-10-09 14:03:12.070325] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:05.760 "name": "raid_bdev1", 00:34:05.760 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:05.760 "strip_size_kb": 0, 00:34:05.760 "state": "online", 00:34:05.760 "raid_level": "raid1", 00:34:05.760 "superblock": true, 00:34:05.760 "num_base_bdevs": 2, 00:34:05.760 "num_base_bdevs_discovered": 1, 00:34:05.760 "num_base_bdevs_operational": 1, 00:34:05.760 "base_bdevs_list": [ 00:34:05.760 { 00:34:05.760 "name": null, 00:34:05.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.760 "is_configured": false, 00:34:05.760 "data_offset": 0, 00:34:05.760 "data_size": 63488 00:34:05.760 }, 00:34:05.760 { 00:34:05.760 "name": "BaseBdev2", 00:34:05.760 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:05.760 "is_configured": true, 00:34:05.760 "data_offset": 2048, 00:34:05.760 "data_size": 63488 00:34:05.760 } 00:34:05.760 ] 00:34:05.760 }' 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:05.760 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.019 14:03:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.019 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:06.278 "name": "raid_bdev1", 00:34:06.278 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:06.278 "strip_size_kb": 0, 00:34:06.278 "state": "online", 00:34:06.278 "raid_level": "raid1", 00:34:06.278 "superblock": true, 00:34:06.278 "num_base_bdevs": 2, 00:34:06.278 "num_base_bdevs_discovered": 1, 00:34:06.278 "num_base_bdevs_operational": 1, 00:34:06.278 "base_bdevs_list": [ 00:34:06.278 { 00:34:06.278 "name": null, 00:34:06.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.278 "is_configured": false, 00:34:06.278 "data_offset": 0, 00:34:06.278 "data_size": 63488 00:34:06.278 }, 00:34:06.278 { 00:34:06.278 "name": "BaseBdev2", 00:34:06.278 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:06.278 "is_configured": true, 00:34:06.278 "data_offset": 2048, 00:34:06.278 "data_size": 
63488 00:34:06.278 } 00:34:06.278 ] 00:34:06.278 }' 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.278 [2024-10-09 14:03:12.697864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:06.278 [2024-10-09 14:03:12.697948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:06.278 [2024-10-09 14:03:12.697974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:06.278 [2024-10-09 14:03:12.697990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:06.278 [2024-10-09 14:03:12.698491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:06.278 [2024-10-09 14:03:12.698524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:34:06.278 [2024-10-09 14:03:12.698626] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:06.278 [2024-10-09 14:03:12.698660] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:06.278 [2024-10-09 14:03:12.698671] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:06.278 [2024-10-09 14:03:12.698691] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:06.278 BaseBdev1 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.278 14:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.215 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.216 "name": "raid_bdev1", 00:34:07.216 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:07.216 "strip_size_kb": 0, 00:34:07.216 "state": "online", 00:34:07.216 "raid_level": "raid1", 00:34:07.216 "superblock": true, 00:34:07.216 "num_base_bdevs": 2, 00:34:07.216 "num_base_bdevs_discovered": 1, 00:34:07.216 "num_base_bdevs_operational": 1, 00:34:07.216 "base_bdevs_list": [ 00:34:07.216 { 00:34:07.216 "name": null, 00:34:07.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.216 "is_configured": false, 00:34:07.216 "data_offset": 0, 00:34:07.216 "data_size": 63488 00:34:07.216 }, 00:34:07.216 { 00:34:07.216 "name": "BaseBdev2", 00:34:07.216 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:07.216 "is_configured": true, 00:34:07.216 "data_offset": 2048, 00:34:07.216 "data_size": 63488 00:34:07.216 } 00:34:07.216 ] 00:34:07.216 }' 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.216 14:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:07.782 "name": "raid_bdev1", 00:34:07.782 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:07.782 "strip_size_kb": 0, 00:34:07.782 "state": "online", 00:34:07.782 "raid_level": "raid1", 00:34:07.782 "superblock": true, 00:34:07.782 "num_base_bdevs": 2, 00:34:07.782 "num_base_bdevs_discovered": 1, 00:34:07.782 "num_base_bdevs_operational": 1, 00:34:07.782 "base_bdevs_list": [ 00:34:07.782 { 00:34:07.782 "name": null, 00:34:07.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.782 "is_configured": false, 00:34:07.782 "data_offset": 0, 00:34:07.782 "data_size": 63488 00:34:07.782 }, 00:34:07.782 { 00:34:07.782 "name": "BaseBdev2", 00:34:07.782 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:07.782 "is_configured": true, 00:34:07.782 "data_offset": 2048, 00:34:07.782 "data_size": 63488 00:34:07.782 } 00:34:07.782 ] 00:34:07.782 }' 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:07.782 14:03:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.782 [2024-10-09 14:03:14.290241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:07.782 [2024-10-09 14:03:14.290471] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:07.782 [2024-10-09 14:03:14.290488] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:07.782 request: 00:34:07.782 { 00:34:07.782 "base_bdev": "BaseBdev1", 00:34:07.782 "raid_bdev": "raid_bdev1", 00:34:07.782 "method": 
"bdev_raid_add_base_bdev", 00:34:07.782 "req_id": 1 00:34:07.782 } 00:34:07.782 Got JSON-RPC error response 00:34:07.782 response: 00:34:07.782 { 00:34:07.782 "code": -22, 00:34:07.782 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:07.782 } 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:07.782 14:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:09.209 14:03:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:09.209 "name": "raid_bdev1", 00:34:09.209 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:09.209 "strip_size_kb": 0, 00:34:09.209 "state": "online", 00:34:09.209 "raid_level": "raid1", 00:34:09.209 "superblock": true, 00:34:09.209 "num_base_bdevs": 2, 00:34:09.209 "num_base_bdevs_discovered": 1, 00:34:09.209 "num_base_bdevs_operational": 1, 00:34:09.209 "base_bdevs_list": [ 00:34:09.209 { 00:34:09.209 "name": null, 00:34:09.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.209 "is_configured": false, 00:34:09.209 "data_offset": 0, 00:34:09.209 "data_size": 63488 00:34:09.209 }, 00:34:09.209 { 00:34:09.209 "name": "BaseBdev2", 00:34:09.209 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:09.209 "is_configured": true, 00:34:09.209 "data_offset": 2048, 00:34:09.209 "data_size": 63488 00:34:09.209 } 00:34:09.209 ] 00:34:09.209 }' 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:09.209 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:09.478 "name": "raid_bdev1", 00:34:09.478 "uuid": "9edb9022-2676-4948-9bd5-7d896443a4b0", 00:34:09.478 "strip_size_kb": 0, 00:34:09.478 "state": "online", 00:34:09.478 "raid_level": "raid1", 00:34:09.478 "superblock": true, 00:34:09.478 "num_base_bdevs": 2, 00:34:09.478 "num_base_bdevs_discovered": 1, 00:34:09.478 "num_base_bdevs_operational": 1, 00:34:09.478 "base_bdevs_list": [ 00:34:09.478 { 00:34:09.478 "name": null, 00:34:09.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.478 "is_configured": false, 00:34:09.478 "data_offset": 0, 00:34:09.478 "data_size": 63488 00:34:09.478 }, 00:34:09.478 { 00:34:09.478 "name": "BaseBdev2", 00:34:09.478 "uuid": "ec4bd097-2088-55ed-8a8a-5549845f902d", 00:34:09.478 "is_configured": true, 00:34:09.478 "data_offset": 2048, 00:34:09.478 "data_size": 63488 00:34:09.478 } 00:34:09.478 ] 00:34:09.478 }' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86809 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86809 ']' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86809 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86809 00:34:09.478 killing process with pid 86809 00:34:09.478 Received shutdown signal, test time was about 60.000000 seconds 00:34:09.478 00:34:09.478 Latency(us) 00:34:09.478 [2024-10-09T14:03:16.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.478 [2024-10-09T14:03:16.029Z] =================================================================================================================== 00:34:09.478 [2024-10-09T14:03:16.029Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86809' 00:34:09.478 14:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86809 00:34:09.478 [2024-10-09 14:03:15.942413] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.478 14:03:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86809 00:34:09.478 [2024-10-09 14:03:15.942665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.479 [2024-10-09 14:03:15.942741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.479 [2024-10-09 14:03:15.942756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:34:09.479 [2024-10-09 14:03:16.002578] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:34:10.050 00:34:10.050 real 0m23.296s 00:34:10.050 user 0m28.030s 00:34:10.050 sys 0m4.504s 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.050 ************************************ 00:34:10.050 END TEST raid_rebuild_test_sb 00:34:10.050 ************************************ 00:34:10.050 14:03:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:34:10.050 14:03:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:10.050 14:03:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:10.050 14:03:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:10.050 ************************************ 00:34:10.050 START TEST raid_rebuild_test_io 00:34:10.050 ************************************ 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:10.050 
14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87541 00:34:10.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87541 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87541 ']' 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.050 14:03:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:10.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:10.050 Zero copy mechanism will not be used. 00:34:10.050 [2024-10-09 14:03:16.584737] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:10.050 [2024-10-09 14:03:16.584943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87541 ] 00:34:10.309 [2024-10-09 14:03:16.762543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.309 [2024-10-09 14:03:16.849410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.567 [2024-10-09 14:03:16.932666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:10.567 [2024-10-09 14:03:16.932726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 BaseBdev1_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 [2024-10-09 14:03:17.525021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:34:11.135 [2024-10-09 14:03:17.525118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.135 [2024-10-09 14:03:17.525163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:11.135 [2024-10-09 14:03:17.525196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.135 [2024-10-09 14:03:17.528436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.135 [2024-10-09 14:03:17.528478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:11.135 BaseBdev1 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 BaseBdev2_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 [2024-10-09 14:03:17.571521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:11.135 [2024-10-09 14:03:17.571611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.135 [2024-10-09 14:03:17.571643] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:11.135 [2024-10-09 14:03:17.571655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.135 [2024-10-09 14:03:17.574862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.135 [2024-10-09 14:03:17.575140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:11.135 BaseBdev2 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 spare_malloc 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 spare_delay 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 [2024-10-09 14:03:17.616105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:34:11.135 [2024-10-09 14:03:17.616195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.135 [2024-10-09 14:03:17.616225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:11.135 [2024-10-09 14:03:17.616237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.135 [2024-10-09 14:03:17.619507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.135 [2024-10-09 14:03:17.619559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:11.135 spare 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 [2024-10-09 14:03:17.624320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:11.135 [2024-10-09 14:03:17.627185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:11.135 [2024-10-09 14:03:17.627294] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:34:11.135 [2024-10-09 14:03:17.627310] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:11.135 [2024-10-09 14:03:17.627705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:11.135 [2024-10-09 14:03:17.627853] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:34:11.135 [2024-10-09 14:03:17.627874] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:34:11.135 [2024-10-09 14:03:17.628017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.135 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.135 
"name": "raid_bdev1", 00:34:11.135 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:11.135 "strip_size_kb": 0, 00:34:11.135 "state": "online", 00:34:11.135 "raid_level": "raid1", 00:34:11.135 "superblock": false, 00:34:11.135 "num_base_bdevs": 2, 00:34:11.135 "num_base_bdevs_discovered": 2, 00:34:11.135 "num_base_bdevs_operational": 2, 00:34:11.135 "base_bdevs_list": [ 00:34:11.136 { 00:34:11.136 "name": "BaseBdev1", 00:34:11.136 "uuid": "ba9055bc-8923-56ab-bbb6-3c0b05a60c54", 00:34:11.136 "is_configured": true, 00:34:11.136 "data_offset": 0, 00:34:11.136 "data_size": 65536 00:34:11.136 }, 00:34:11.136 { 00:34:11.136 "name": "BaseBdev2", 00:34:11.136 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:11.136 "is_configured": true, 00:34:11.136 "data_offset": 0, 00:34:11.136 "data_size": 65536 00:34:11.136 } 00:34:11.136 ] 00:34:11.136 }' 00:34:11.136 14:03:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:11.136 14:03:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.704 [2024-10-09 14:03:18.080722] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.704 [2024-10-09 14:03:18.172385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:11.704 14:03:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.704 "name": "raid_bdev1", 00:34:11.704 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:11.704 "strip_size_kb": 0, 00:34:11.704 "state": "online", 00:34:11.704 "raid_level": "raid1", 00:34:11.704 "superblock": false, 00:34:11.704 "num_base_bdevs": 2, 00:34:11.704 "num_base_bdevs_discovered": 1, 00:34:11.704 "num_base_bdevs_operational": 1, 00:34:11.704 "base_bdevs_list": [ 00:34:11.704 { 00:34:11.704 "name": null, 00:34:11.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.704 "is_configured": false, 00:34:11.704 "data_offset": 0, 00:34:11.704 "data_size": 65536 00:34:11.704 }, 00:34:11.704 { 00:34:11.704 "name": "BaseBdev2", 00:34:11.704 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:11.704 "is_configured": true, 00:34:11.704 "data_offset": 0, 00:34:11.704 "data_size": 65536 00:34:11.704 } 00:34:11.704 ] 00:34:11.704 }' 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:34:11.704 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:11.963 [2024-10-09 14:03:18.290583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:11.963 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:11.963 Zero copy mechanism will not be used. 00:34:11.963 Running I/O for 60 seconds... 00:34:12.222 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:12.222 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.222 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:12.222 [2024-10-09 14:03:18.628350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:12.222 14:03:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.222 14:03:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:12.222 [2024-10-09 14:03:18.666816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:12.222 [2024-10-09 14:03:18.669153] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:12.481 [2024-10-09 14:03:18.787818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:12.481 [2024-10-09 14:03:18.788268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:12.481 [2024-10-09 14:03:18.997002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:12.481 [2024-10-09 14:03:18.997536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:12.739 [2024-10-09 14:03:19.250775] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:12.997 178.00 IOPS, 534.00 MiB/s [2024-10-09T14:03:19.548Z] [2024-10-09 14:03:19.470911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:12.997 [2024-10-09 14:03:19.471147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:13.256 "name": "raid_bdev1", 00:34:13.256 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:13.256 "strip_size_kb": 0, 00:34:13.256 "state": "online", 00:34:13.256 "raid_level": "raid1", 00:34:13.256 "superblock": false, 00:34:13.256 "num_base_bdevs": 2, 00:34:13.256 
"num_base_bdevs_discovered": 2, 00:34:13.256 "num_base_bdevs_operational": 2, 00:34:13.256 "process": { 00:34:13.256 "type": "rebuild", 00:34:13.256 "target": "spare", 00:34:13.256 "progress": { 00:34:13.256 "blocks": 10240, 00:34:13.256 "percent": 15 00:34:13.256 } 00:34:13.256 }, 00:34:13.256 "base_bdevs_list": [ 00:34:13.256 { 00:34:13.256 "name": "spare", 00:34:13.256 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:13.256 "is_configured": true, 00:34:13.256 "data_offset": 0, 00:34:13.256 "data_size": 65536 00:34:13.256 }, 00:34:13.256 { 00:34:13.256 "name": "BaseBdev2", 00:34:13.256 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:13.256 "is_configured": true, 00:34:13.256 "data_offset": 0, 00:34:13.256 "data_size": 65536 00:34:13.256 } 00:34:13.256 ] 00:34:13.256 }' 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:13.256 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:13.257 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:13.516 [2024-10-09 14:03:19.806371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:13.516 [2024-10-09 14:03:19.806969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:13.516 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:13.516 14:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:13.516 14:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.516 14:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:13.516 [2024-10-09 14:03:19.821436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:34:13.516 [2024-10-09 14:03:19.921493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:13.516 [2024-10-09 14:03:20.034271] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:13.516 [2024-10-09 14:03:20.042548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.516 [2024-10-09 14:03:20.042606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:13.516 [2024-10-09 14:03:20.042622] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:13.516 [2024-10-09 14:03:20.060688] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.775 14:03:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.775 "name": "raid_bdev1", 00:34:13.775 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:13.775 "strip_size_kb": 0, 00:34:13.775 "state": "online", 00:34:13.775 "raid_level": "raid1", 00:34:13.775 "superblock": false, 00:34:13.775 "num_base_bdevs": 2, 00:34:13.775 "num_base_bdevs_discovered": 1, 00:34:13.775 "num_base_bdevs_operational": 1, 00:34:13.775 "base_bdevs_list": [ 00:34:13.775 { 00:34:13.775 "name": null, 00:34:13.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:13.775 "is_configured": false, 00:34:13.775 "data_offset": 0, 00:34:13.775 "data_size": 65536 00:34:13.775 }, 00:34:13.775 { 00:34:13.775 "name": "BaseBdev2", 00:34:13.775 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:13.775 "is_configured": true, 00:34:13.775 "data_offset": 0, 00:34:13.775 "data_size": 65536 00:34:13.775 } 00:34:13.775 ] 00:34:13.775 }' 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.775 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:14.034 157.00 IOPS, 471.00 MiB/s [2024-10-09T14:03:20.585Z] 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:14.034 14:03:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:14.034 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:14.034 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:14.034 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:14.034 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.034 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.035 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.035 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:14.035 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.035 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:14.035 "name": "raid_bdev1", 00:34:14.035 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:14.035 "strip_size_kb": 0, 00:34:14.035 "state": "online", 00:34:14.035 "raid_level": "raid1", 00:34:14.035 "superblock": false, 00:34:14.035 "num_base_bdevs": 2, 00:34:14.035 "num_base_bdevs_discovered": 1, 00:34:14.035 "num_base_bdevs_operational": 1, 00:34:14.035 "base_bdevs_list": [ 00:34:14.035 { 00:34:14.035 "name": null, 00:34:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.035 "is_configured": false, 00:34:14.035 "data_offset": 0, 00:34:14.035 "data_size": 65536 00:34:14.035 }, 00:34:14.035 { 00:34:14.035 "name": "BaseBdev2", 00:34:14.035 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:14.035 "is_configured": true, 00:34:14.035 "data_offset": 0, 00:34:14.035 "data_size": 65536 00:34:14.035 } 00:34:14.035 ] 00:34:14.035 }' 00:34:14.035 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:34:14.294 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:14.295 [2024-10-09 14:03:20.641498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.295 14:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:14.295 [2024-10-09 14:03:20.697127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:14.295 [2024-10-09 14:03:20.699382] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:14.295 [2024-10-09 14:03:20.824226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:14.295 [2024-10-09 14:03:20.824718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:14.554 [2024-10-09 14:03:21.057092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:14.554 [2024-10-09 14:03:21.057353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:15.071 159.00 IOPS, 477.00 MiB/s [2024-10-09T14:03:21.622Z] [2024-10-09 
14:03:21.506180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:15.071 [2024-10-09 14:03:21.506409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:15.332 "name": "raid_bdev1", 00:34:15.332 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:15.332 "strip_size_kb": 0, 00:34:15.332 "state": "online", 00:34:15.332 "raid_level": "raid1", 00:34:15.332 "superblock": false, 00:34:15.332 "num_base_bdevs": 2, 00:34:15.332 "num_base_bdevs_discovered": 2, 00:34:15.332 "num_base_bdevs_operational": 2, 00:34:15.332 "process": { 00:34:15.332 "type": "rebuild", 00:34:15.332 "target": "spare", 00:34:15.332 "progress": { 
00:34:15.332 "blocks": 10240, 00:34:15.332 "percent": 15 00:34:15.332 } 00:34:15.332 }, 00:34:15.332 "base_bdevs_list": [ 00:34:15.332 { 00:34:15.332 "name": "spare", 00:34:15.332 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:15.332 "is_configured": true, 00:34:15.332 "data_offset": 0, 00:34:15.332 "data_size": 65536 00:34:15.332 }, 00:34:15.332 { 00:34:15.332 "name": "BaseBdev2", 00:34:15.332 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:15.332 "is_configured": true, 00:34:15.332 "data_offset": 0, 00:34:15.332 "data_size": 65536 00:34:15.332 } 00:34:15.332 ] 00:34:15.332 }' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=335 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:15.332 [2024-10-09 14:03:21.832585] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:15.332 "name": "raid_bdev1", 00:34:15.332 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:15.332 "strip_size_kb": 0, 00:34:15.332 "state": "online", 00:34:15.332 "raid_level": "raid1", 00:34:15.332 "superblock": false, 00:34:15.332 "num_base_bdevs": 2, 00:34:15.332 "num_base_bdevs_discovered": 2, 00:34:15.332 "num_base_bdevs_operational": 2, 00:34:15.332 "process": { 00:34:15.332 "type": "rebuild", 00:34:15.332 "target": "spare", 00:34:15.332 "progress": { 00:34:15.332 "blocks": 14336, 00:34:15.332 "percent": 21 00:34:15.332 } 00:34:15.332 }, 00:34:15.332 "base_bdevs_list": [ 00:34:15.332 { 00:34:15.332 "name": "spare", 00:34:15.332 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:15.332 "is_configured": true, 00:34:15.332 "data_offset": 0, 00:34:15.332 "data_size": 65536 00:34:15.332 }, 00:34:15.332 { 00:34:15.332 "name": "BaseBdev2", 00:34:15.332 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 
00:34:15.332 "is_configured": true, 00:34:15.332 "data_offset": 0, 00:34:15.332 "data_size": 65536 00:34:15.332 } 00:34:15.332 ] 00:34:15.332 }' 00:34:15.332 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:15.591 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:15.591 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:15.591 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:15.591 14:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:15.850 [2024-10-09 14:03:22.297019] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:34:15.850 [2024-10-09 14:03:22.297502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:34:16.110 143.50 IOPS, 430.50 MiB/s [2024-10-09T14:03:22.661Z] [2024-10-09 14:03:22.413272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:16.369 [2024-10-09 14:03:22.853163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:34:16.369 [2024-10-09 14:03:22.853376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 
-- # local process_type=rebuild 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:16.628 14:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.628 14:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:16.628 "name": "raid_bdev1", 00:34:16.628 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:16.628 "strip_size_kb": 0, 00:34:16.628 "state": "online", 00:34:16.628 "raid_level": "raid1", 00:34:16.628 "superblock": false, 00:34:16.628 "num_base_bdevs": 2, 00:34:16.628 "num_base_bdevs_discovered": 2, 00:34:16.628 "num_base_bdevs_operational": 2, 00:34:16.628 "process": { 00:34:16.628 "type": "rebuild", 00:34:16.628 "target": "spare", 00:34:16.628 "progress": { 00:34:16.628 "blocks": 30720, 00:34:16.628 "percent": 46 00:34:16.628 } 00:34:16.628 }, 00:34:16.628 "base_bdevs_list": [ 00:34:16.628 { 00:34:16.628 "name": "spare", 00:34:16.628 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:16.628 "is_configured": true, 00:34:16.628 "data_offset": 0, 00:34:16.628 "data_size": 65536 00:34:16.628 }, 00:34:16.629 { 00:34:16.629 "name": "BaseBdev2", 00:34:16.629 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:16.629 "is_configured": true, 00:34:16.629 "data_offset": 0, 00:34:16.629 "data_size": 65536 00:34:16.629 } 00:34:16.629 ] 00:34:16.629 }' 00:34:16.629 14:03:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:16.629 14:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:16.629 14:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:16.629 [2024-10-09 14:03:23.095663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:34:16.629 14:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:16.629 14:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:16.888 [2024-10-09 14:03:23.217778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:34:17.147 124.60 IOPS, 373.80 MiB/s [2024-10-09T14:03:23.698Z] [2024-10-09 14:03:23.688917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:34:17.716 [2024-10-09 14:03:24.027401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:17.716 "name": "raid_bdev1", 00:34:17.716 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:17.716 "strip_size_kb": 0, 00:34:17.716 "state": "online", 00:34:17.716 "raid_level": "raid1", 00:34:17.716 "superblock": false, 00:34:17.716 "num_base_bdevs": 2, 00:34:17.716 "num_base_bdevs_discovered": 2, 00:34:17.716 "num_base_bdevs_operational": 2, 00:34:17.716 "process": { 00:34:17.716 "type": "rebuild", 00:34:17.716 "target": "spare", 00:34:17.716 "progress": { 00:34:17.716 "blocks": 47104, 00:34:17.716 "percent": 71 00:34:17.716 } 00:34:17.716 }, 00:34:17.716 "base_bdevs_list": [ 00:34:17.716 { 00:34:17.716 "name": "spare", 00:34:17.716 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:17.716 "is_configured": true, 00:34:17.716 "data_offset": 0, 00:34:17.716 "data_size": 65536 00:34:17.716 }, 00:34:17.716 { 00:34:17.716 "name": "BaseBdev2", 00:34:17.716 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:17.716 "is_configured": true, 00:34:17.716 "data_offset": 0, 00:34:17.716 "data_size": 65536 00:34:17.716 } 00:34:17.716 ] 00:34:17.716 }' 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:17.716 14:03:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:17.716 14:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:17.976 111.83 IOPS, 335.50 MiB/s [2024-10-09T14:03:24.527Z] [2024-10-09 14:03:24.357222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:34:18.544 [2024-10-09 14:03:25.016564] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:18.803 [2024-10-09 14:03:25.121987] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:18.803 [2024-10-09 14:03:25.123593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:18.803 "name": "raid_bdev1", 00:34:18.803 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:18.803 "strip_size_kb": 0, 00:34:18.803 "state": "online", 00:34:18.803 "raid_level": "raid1", 00:34:18.803 "superblock": false, 00:34:18.803 "num_base_bdevs": 2, 00:34:18.803 "num_base_bdevs_discovered": 2, 00:34:18.803 "num_base_bdevs_operational": 2, 00:34:18.803 "base_bdevs_list": [ 00:34:18.803 { 00:34:18.803 "name": "spare", 00:34:18.803 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:18.803 "is_configured": true, 00:34:18.803 "data_offset": 0, 00:34:18.803 "data_size": 65536 00:34:18.803 }, 00:34:18.803 { 00:34:18.803 "name": "BaseBdev2", 00:34:18.803 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:18.803 "is_configured": true, 00:34:18.803 "data_offset": 0, 00:34:18.803 "data_size": 65536 00:34:18.803 } 00:34:18.803 ] 00:34:18.803 }' 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:18.803 101.29 IOPS, 303.86 MiB/s [2024-10-09T14:03:25.354Z] 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:18.803 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:19.063 "name": "raid_bdev1", 00:34:19.063 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:19.063 "strip_size_kb": 0, 00:34:19.063 "state": "online", 00:34:19.063 "raid_level": "raid1", 00:34:19.063 "superblock": false, 00:34:19.063 "num_base_bdevs": 2, 00:34:19.063 "num_base_bdevs_discovered": 2, 00:34:19.063 "num_base_bdevs_operational": 2, 00:34:19.063 "base_bdevs_list": [ 00:34:19.063 { 00:34:19.063 "name": "spare", 00:34:19.063 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:19.063 "is_configured": true, 00:34:19.063 "data_offset": 0, 00:34:19.063 "data_size": 65536 00:34:19.063 }, 00:34:19.063 { 00:34:19.063 "name": "BaseBdev2", 00:34:19.063 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:19.063 "is_configured": true, 00:34:19.063 "data_offset": 0, 00:34:19.063 "data_size": 65536 00:34:19.063 } 00:34:19.063 ] 00:34:19.063 }' 00:34:19.063 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:19.064 14:03:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.064 "name": "raid_bdev1", 00:34:19.064 "uuid": "d37bdd3d-3fc2-431e-985d-5262216ba302", 00:34:19.064 "strip_size_kb": 0, 00:34:19.064 "state": "online", 
00:34:19.064 "raid_level": "raid1", 00:34:19.064 "superblock": false, 00:34:19.064 "num_base_bdevs": 2, 00:34:19.064 "num_base_bdevs_discovered": 2, 00:34:19.064 "num_base_bdevs_operational": 2, 00:34:19.064 "base_bdevs_list": [ 00:34:19.064 { 00:34:19.064 "name": "spare", 00:34:19.064 "uuid": "a791780e-eed3-543c-bfb7-87a90281a0a2", 00:34:19.064 "is_configured": true, 00:34:19.064 "data_offset": 0, 00:34:19.064 "data_size": 65536 00:34:19.064 }, 00:34:19.064 { 00:34:19.064 "name": "BaseBdev2", 00:34:19.064 "uuid": "dd8cea6d-6168-51c9-817d-a6d4ab0a6019", 00:34:19.064 "is_configured": true, 00:34:19.064 "data_offset": 0, 00:34:19.064 "data_size": 65536 00:34:19.064 } 00:34:19.064 ] 00:34:19.064 }' 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.064 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.632 14:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:19.632 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.632 14:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.632 [2024-10-09 14:03:26.002726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:19.632 [2024-10-09 14:03:26.002756] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:19.632 00:34:19.632 Latency(us) 00:34:19.632 [2024-10-09T14:03:26.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:19.632 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:34:19.632 raid_bdev1 : 7.79 94.31 282.92 0.00 0.00 14585.53 280.87 118838.61 00:34:19.632 [2024-10-09T14:03:26.183Z] =================================================================================================================== 00:34:19.632 
[2024-10-09T14:03:26.183Z] Total : 94.31 282.92 0.00 0.00 14585.53 280.87 118838.61 00:34:19.632 [2024-10-09 14:03:26.090316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.632 [2024-10-09 14:03:26.090467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:19.632 [2024-10-09 14:03:26.090593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:19.632 [2024-10-09 14:03:26.090828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:34:19.632 { 00:34:19.632 "results": [ 00:34:19.632 { 00:34:19.632 "job": "raid_bdev1", 00:34:19.632 "core_mask": "0x1", 00:34:19.632 "workload": "randrw", 00:34:19.632 "percentage": 50, 00:34:19.632 "status": "finished", 00:34:19.632 "queue_depth": 2, 00:34:19.632 "io_size": 3145728, 00:34:19.632 "runtime": 7.793642, 00:34:19.632 "iops": 94.30764204976313, 00:34:19.632 "mibps": 282.9229261492894, 00:34:19.632 "io_failed": 0, 00:34:19.632 "io_timeout": 0, 00:34:19.632 "avg_latency_us": 14585.526940071266, 00:34:19.632 "min_latency_us": 280.86857142857144, 00:34:19.632 "max_latency_us": 118838.61333333333 00:34:19.632 } 00:34:19.632 ], 00:34:19.632 "core_count": 1 00:34:19.632 } 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:19.632 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:34:19.891 /dev/nbd0 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:20.150 1+0 records in 00:34:20.150 1+0 records out 00:34:20.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063321 s, 6.5 MB/s 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:20.150 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:34:20.409 /dev/nbd1 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:20.409 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:20.410 14:03:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:20.410 1+0 records in 00:34:20.410 1+0 records out 00:34:20.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214842 s, 19.1 MB/s 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:20.410 14:03:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd1 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:20.668 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:20.927 14:03:27 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87541 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87541 ']' 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87541 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87541 00:34:20.927 killing process with pid 87541 00:34:20.927 Received shutdown signal, test time was about 9.102487 seconds 00:34:20.927 00:34:20.927 Latency(us) 00:34:20.927 [2024-10-09T14:03:27.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.927 [2024-10-09T14:03:27.478Z] =================================================================================================================== 00:34:20.927 [2024-10-09T14:03:27.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:20.927 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:20.927 14:03:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87541' 00:34:20.928 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87541 00:34:20.928 [2024-10-09 14:03:27.395525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:20.928 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 87541 00:34:20.928 [2024-10-09 14:03:27.422023] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:21.186 14:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:34:21.186 00:34:21.186 real 0m11.209s 00:34:21.186 user 0m14.447s 00:34:21.186 sys 0m1.715s 00:34:21.186 ************************************ 00:34:21.186 END TEST raid_rebuild_test_io 00:34:21.186 ************************************ 00:34:21.186 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:21.186 14:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:34:21.186 14:03:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:34:21.186 14:03:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:21.186 14:03:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:21.186 14:03:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:21.445 ************************************ 00:34:21.445 START TEST raid_rebuild_test_sb_io 00:34:21.445 ************************************ 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:21.445 14:03:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87906 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87906 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87906 ']' 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:21.445 14:03:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:21.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:21.445 Zero copy mechanism will not be used. 00:34:21.445 [2024-10-09 14:03:27.858341] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:34:21.445 [2024-10-09 14:03:27.858522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87906 ] 00:34:21.704 [2024-10-09 14:03:28.036197] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:21.704 [2024-10-09 14:03:28.079609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.704 [2024-10-09 14:03:28.123130] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:21.704 [2024-10-09 14:03:28.123165] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.271 BaseBdev1_malloc 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.271 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.271 [2024-10-09 14:03:28.815662] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:22.271 [2024-10-09 14:03:28.815724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.271 [2024-10-09 14:03:28.815754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:22.271 [2024-10-09 14:03:28.815779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.271 [2024-10-09 14:03:28.818340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.271 [2024-10-09 14:03:28.818498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:22.530 BaseBdev1 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.530 BaseBdev2_malloc 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.530 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.530 [2024-10-09 14:03:28.848716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:22.530 [2024-10-09 14:03:28.848770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:34:22.531 [2024-10-09 14:03:28.848796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:22.531 [2024-10-09 14:03:28.848808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.531 [2024-10-09 14:03:28.851380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.531 [2024-10-09 14:03:28.851420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:22.531 BaseBdev2 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.531 spare_malloc 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.531 spare_delay 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.531 
[2024-10-09 14:03:28.889760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:22.531 [2024-10-09 14:03:28.889812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:22.531 [2024-10-09 14:03:28.889837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:34:22.531 [2024-10-09 14:03:28.889847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:22.531 [2024-10-09 14:03:28.892336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:22.531 [2024-10-09 14:03:28.892375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:22.531 spare 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.531 [2024-10-09 14:03:28.897806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:22.531 [2024-10-09 14:03:28.900075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:22.531 [2024-10-09 14:03:28.900244] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:34:22.531 [2024-10-09 14:03:28.900259] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:22.531 [2024-10-09 14:03:28.900500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:22.531 [2024-10-09 14:03:28.900665] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:34:22.531 [2024-10-09 
14:03:28.900679] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:34:22.531 [2024-10-09 14:03:28.900809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.531 "name": "raid_bdev1", 00:34:22.531 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:22.531 "strip_size_kb": 0, 00:34:22.531 "state": "online", 00:34:22.531 "raid_level": "raid1", 00:34:22.531 "superblock": true, 00:34:22.531 "num_base_bdevs": 2, 00:34:22.531 "num_base_bdevs_discovered": 2, 00:34:22.531 "num_base_bdevs_operational": 2, 00:34:22.531 "base_bdevs_list": [ 00:34:22.531 { 00:34:22.531 "name": "BaseBdev1", 00:34:22.531 "uuid": "14366ee4-8949-5579-bb5f-dabfe11b1dcc", 00:34:22.531 "is_configured": true, 00:34:22.531 "data_offset": 2048, 00:34:22.531 "data_size": 63488 00:34:22.531 }, 00:34:22.531 { 00:34:22.531 "name": "BaseBdev2", 00:34:22.531 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:22.531 "is_configured": true, 00:34:22.531 "data_offset": 2048, 00:34:22.531 "data_size": 63488 00:34:22.531 } 00:34:22.531 ] 00:34:22.531 }' 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.531 14:03:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 [2024-10-09 14:03:29.362160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 [2024-10-09 14:03:29.441877] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:23.098 "name": "raid_bdev1", 00:34:23.098 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:23.098 "strip_size_kb": 0, 00:34:23.098 "state": "online", 00:34:23.098 "raid_level": "raid1", 00:34:23.098 "superblock": true, 00:34:23.098 "num_base_bdevs": 2, 00:34:23.098 "num_base_bdevs_discovered": 1, 00:34:23.098 "num_base_bdevs_operational": 1, 00:34:23.098 "base_bdevs_list": [ 00:34:23.098 { 00:34:23.098 "name": null, 00:34:23.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.098 "is_configured": false, 00:34:23.098 "data_offset": 0, 00:34:23.098 "data_size": 63488 00:34:23.098 }, 00:34:23.098 { 00:34:23.098 "name": "BaseBdev2", 00:34:23.098 "uuid": 
"17872983-f700-5adc-98cb-7b0112c923e4", 00:34:23.098 "is_configured": true, 00:34:23.098 "data_offset": 2048, 00:34:23.098 "data_size": 63488 00:34:23.098 } 00:34:23.098 ] 00:34:23.098 }' 00:34:23.098 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:23.099 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.099 [2024-10-09 14:03:29.555921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:23.099 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:23.099 Zero copy mechanism will not be used. 00:34:23.099 Running I/O for 60 seconds... 00:34:23.358 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:23.358 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.358 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:23.358 [2024-10-09 14:03:29.894969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:23.616 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.616 14:03:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:23.616 [2024-10-09 14:03:29.927461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:23.616 [2024-10-09 14:03:29.929788] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:23.616 [2024-10-09 14:03:30.163856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:23.616 [2024-10-09 14:03:30.164118] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:24.184 [2024-10-09 14:03:30.500489] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:24.185 [2024-10-09 14:03:30.500906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:24.185 219.00 IOPS, 657.00 MiB/s [2024-10-09T14:03:30.736Z] [2024-10-09 14:03:30.733409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:24.185 [2024-10-09 14:03:30.733711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:24.445 "name": "raid_bdev1", 00:34:24.445 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 
00:34:24.445 "strip_size_kb": 0, 00:34:24.445 "state": "online", 00:34:24.445 "raid_level": "raid1", 00:34:24.445 "superblock": true, 00:34:24.445 "num_base_bdevs": 2, 00:34:24.445 "num_base_bdevs_discovered": 2, 00:34:24.445 "num_base_bdevs_operational": 2, 00:34:24.445 "process": { 00:34:24.445 "type": "rebuild", 00:34:24.445 "target": "spare", 00:34:24.445 "progress": { 00:34:24.445 "blocks": 10240, 00:34:24.445 "percent": 16 00:34:24.445 } 00:34:24.445 }, 00:34:24.445 "base_bdevs_list": [ 00:34:24.445 { 00:34:24.445 "name": "spare", 00:34:24.445 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:24.445 "is_configured": true, 00:34:24.445 "data_offset": 2048, 00:34:24.445 "data_size": 63488 00:34:24.445 }, 00:34:24.445 { 00:34:24.445 "name": "BaseBdev2", 00:34:24.445 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:24.445 "is_configured": true, 00:34:24.445 "data_offset": 2048, 00:34:24.445 "data_size": 63488 00:34:24.445 } 00:34:24.445 ] 00:34:24.445 }' 00:34:24.445 14:03:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:24.704 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.704 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:24.704 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:24.704 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:24.704 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.705 [2024-10-09 14:03:31.070613] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:24.705 [2024-10-09 14:03:31.085936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:24.705 [2024-10-09 14:03:31.187066] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:24.705 [2024-10-09 14:03:31.194523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:24.705 [2024-10-09 14:03:31.194555] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:24.705 [2024-10-09 14:03:31.194582] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:24.705 [2024-10-09 14:03:31.223486] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:24.705 14:03:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.705 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:24.963 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.963 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:24.963 "name": "raid_bdev1", 00:34:24.963 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:24.963 "strip_size_kb": 0, 00:34:24.963 "state": "online", 00:34:24.963 "raid_level": "raid1", 00:34:24.963 "superblock": true, 00:34:24.963 "num_base_bdevs": 2, 00:34:24.963 "num_base_bdevs_discovered": 1, 00:34:24.963 "num_base_bdevs_operational": 1, 00:34:24.963 "base_bdevs_list": [ 00:34:24.963 { 00:34:24.963 "name": null, 00:34:24.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:24.963 "is_configured": false, 00:34:24.963 "data_offset": 0, 00:34:24.963 "data_size": 63488 00:34:24.963 }, 00:34:24.963 { 00:34:24.963 "name": "BaseBdev2", 00:34:24.963 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:24.963 "is_configured": true, 00:34:24.963 "data_offset": 2048, 00:34:24.963 "data_size": 63488 00:34:24.963 } 00:34:24.963 ] 00:34:24.963 }' 00:34:24.964 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:24.964 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:25.222 197.50 IOPS, 592.50 MiB/s [2024-10-09T14:03:31.773Z] 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.222 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:25.223 "name": "raid_bdev1", 00:34:25.223 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:25.223 "strip_size_kb": 0, 00:34:25.223 "state": "online", 00:34:25.223 "raid_level": "raid1", 00:34:25.223 "superblock": true, 00:34:25.223 "num_base_bdevs": 2, 00:34:25.223 "num_base_bdevs_discovered": 1, 00:34:25.223 "num_base_bdevs_operational": 1, 00:34:25.223 "base_bdevs_list": [ 00:34:25.223 { 00:34:25.223 "name": null, 00:34:25.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:25.223 "is_configured": false, 00:34:25.223 "data_offset": 0, 00:34:25.223 "data_size": 63488 00:34:25.223 }, 00:34:25.223 { 00:34:25.223 "name": "BaseBdev2", 00:34:25.223 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:25.223 "is_configured": true, 00:34:25.223 "data_offset": 2048, 00:34:25.223 "data_size": 63488 00:34:25.223 } 00:34:25.223 ] 00:34:25.223 }' 00:34:25.223 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:25.482 [2024-10-09 14:03:31.843953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.482 14:03:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:25.482 [2024-10-09 14:03:31.887742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:25.482 [2024-10-09 14:03:31.890034] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:25.482 [2024-10-09 14:03:32.003031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:25.482 [2024-10-09 14:03:32.003537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:34:25.741 [2024-10-09 14:03:32.217536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:25.741 [2024-10-09 14:03:32.218049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:34:26.308 [2024-10-09 14:03:32.559412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:26.308 [2024-10-09 14:03:32.560058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:34:26.308 185.33 IOPS, 556.00 MiB/s [2024-10-09T14:03:32.859Z] [2024-10-09 14:03:32.780337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:26.308 [2024-10-09 14:03:32.780894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:26.567 "name": "raid_bdev1", 00:34:26.567 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:26.567 "strip_size_kb": 0, 00:34:26.567 
"state": "online", 00:34:26.567 "raid_level": "raid1", 00:34:26.567 "superblock": true, 00:34:26.567 "num_base_bdevs": 2, 00:34:26.567 "num_base_bdevs_discovered": 2, 00:34:26.567 "num_base_bdevs_operational": 2, 00:34:26.567 "process": { 00:34:26.567 "type": "rebuild", 00:34:26.567 "target": "spare", 00:34:26.567 "progress": { 00:34:26.567 "blocks": 10240, 00:34:26.567 "percent": 16 00:34:26.567 } 00:34:26.567 }, 00:34:26.567 "base_bdevs_list": [ 00:34:26.567 { 00:34:26.567 "name": "spare", 00:34:26.567 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:26.567 "is_configured": true, 00:34:26.567 "data_offset": 2048, 00:34:26.567 "data_size": 63488 00:34:26.567 }, 00:34:26.567 { 00:34:26.567 "name": "BaseBdev2", 00:34:26.567 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:26.567 "is_configured": true, 00:34:26.567 "data_offset": 2048, 00:34:26.567 "data_size": 63488 00:34:26.567 } 00:34:26.567 ] 00:34:26.567 }' 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.567 14:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:26.567 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:26.567 14:03:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=347 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:26.567 "name": "raid_bdev1", 00:34:26.567 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:26.567 "strip_size_kb": 0, 00:34:26.567 "state": "online", 00:34:26.567 "raid_level": "raid1", 00:34:26.567 "superblock": true, 00:34:26.567 "num_base_bdevs": 2, 00:34:26.567 "num_base_bdevs_discovered": 2, 00:34:26.567 "num_base_bdevs_operational": 2, 00:34:26.567 "process": { 00:34:26.567 "type": "rebuild", 00:34:26.567 "target": "spare", 00:34:26.567 
"progress": { 00:34:26.567 "blocks": 12288, 00:34:26.567 "percent": 19 00:34:26.567 } 00:34:26.567 }, 00:34:26.567 "base_bdevs_list": [ 00:34:26.567 { 00:34:26.567 "name": "spare", 00:34:26.567 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:26.567 "is_configured": true, 00:34:26.567 "data_offset": 2048, 00:34:26.567 "data_size": 63488 00:34:26.567 }, 00:34:26.567 { 00:34:26.567 "name": "BaseBdev2", 00:34:26.567 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:26.567 "is_configured": true, 00:34:26.567 "data_offset": 2048, 00:34:26.567 "data_size": 63488 00:34:26.567 } 00:34:26.567 ] 00:34:26.567 }' 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:26.567 [2024-10-09 14:03:33.110563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:34:26.567 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.826 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:26.826 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.826 14:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:26.826 [2024-10-09 14:03:33.237373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:34:27.084 [2024-10-09 14:03:33.566003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:34:27.343 167.25 IOPS, 501.75 MiB/s [2024-10-09T14:03:33.894Z] [2024-10-09 14:03:33.805187] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:27.343 [2024-10-09 14:03:33.805584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:34:27.910 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:27.910 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:27.910 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:27.911 "name": "raid_bdev1", 00:34:27.911 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:27.911 "strip_size_kb": 0, 00:34:27.911 "state": "online", 00:34:27.911 "raid_level": "raid1", 00:34:27.911 "superblock": true, 00:34:27.911 "num_base_bdevs": 2, 00:34:27.911 "num_base_bdevs_discovered": 2, 00:34:27.911 "num_base_bdevs_operational": 2, 00:34:27.911 "process": { 00:34:27.911 "type": "rebuild", 00:34:27.911 "target": "spare", 00:34:27.911 "progress": { 00:34:27.911 "blocks": 26624, 00:34:27.911 "percent": 41 00:34:27.911 } 00:34:27.911 }, 00:34:27.911 
"base_bdevs_list": [ 00:34:27.911 { 00:34:27.911 "name": "spare", 00:34:27.911 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:27.911 "is_configured": true, 00:34:27.911 "data_offset": 2048, 00:34:27.911 "data_size": 63488 00:34:27.911 }, 00:34:27.911 { 00:34:27.911 "name": "BaseBdev2", 00:34:27.911 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:27.911 "is_configured": true, 00:34:27.911 "data_offset": 2048, 00:34:27.911 "data_size": 63488 00:34:27.911 } 00:34:27.911 ] 00:34:27.911 }' 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:27.911 14:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:28.169 [2024-10-09 14:03:34.464403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:34:28.169 143.00 IOPS, 429.00 MiB/s [2024-10-09T14:03:34.720Z] [2024-10-09 14:03:34.578633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:34:28.428 [2024-10-09 14:03:34.805823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:29.038 14:03:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.038 [2024-10-09 14:03:35.364435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:29.038 "name": "raid_bdev1", 00:34:29.038 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:29.038 "strip_size_kb": 0, 00:34:29.038 "state": "online", 00:34:29.038 "raid_level": "raid1", 00:34:29.038 "superblock": true, 00:34:29.038 "num_base_bdevs": 2, 00:34:29.038 "num_base_bdevs_discovered": 2, 00:34:29.038 "num_base_bdevs_operational": 2, 00:34:29.038 "process": { 00:34:29.038 "type": "rebuild", 00:34:29.038 "target": "spare", 00:34:29.038 "progress": { 00:34:29.038 "blocks": 45056, 00:34:29.038 "percent": 70 00:34:29.038 } 00:34:29.038 }, 00:34:29.038 "base_bdevs_list": [ 00:34:29.038 { 00:34:29.038 "name": "spare", 00:34:29.038 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:29.038 "is_configured": true, 00:34:29.038 "data_offset": 2048, 00:34:29.038 "data_size": 63488 00:34:29.038 }, 00:34:29.038 { 00:34:29.038 "name": 
"BaseBdev2", 00:34:29.038 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:29.038 "is_configured": true, 00:34:29.038 "data_offset": 2048, 00:34:29.038 "data_size": 63488 00:34:29.038 } 00:34:29.038 ] 00:34:29.038 }' 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:29.038 14:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:29.321 126.00 IOPS, 378.00 MiB/s [2024-10-09T14:03:35.872Z] [2024-10-09 14:03:35.806537] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:34:29.579 [2024-10-09 14:03:36.021746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:34:29.837 [2024-10-09 14:03:36.350940] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:30.096 [2024-10-09 14:03:36.450829] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:30.096 [2024-10-09 14:03:36.458619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:30.096 "name": "raid_bdev1", 00:34:30.096 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:30.096 "strip_size_kb": 0, 00:34:30.096 "state": "online", 00:34:30.096 "raid_level": "raid1", 00:34:30.096 "superblock": true, 00:34:30.096 "num_base_bdevs": 2, 00:34:30.096 "num_base_bdevs_discovered": 2, 00:34:30.096 "num_base_bdevs_operational": 2, 00:34:30.096 "base_bdevs_list": [ 00:34:30.096 { 00:34:30.096 "name": "spare", 00:34:30.096 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:30.096 "is_configured": true, 00:34:30.096 "data_offset": 2048, 00:34:30.096 "data_size": 63488 00:34:30.096 }, 00:34:30.096 { 00:34:30.096 "name": "BaseBdev2", 00:34:30.096 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:30.096 "is_configured": true, 00:34:30.096 "data_offset": 2048, 00:34:30.096 "data_size": 63488 00:34:30.096 } 00:34:30.096 ] 00:34:30.096 }' 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:30.096 113.43 IOPS, 340.29 MiB/s [2024-10-09T14:03:36.647Z] 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.096 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.355 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:30.355 "name": "raid_bdev1", 00:34:30.355 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:30.355 "strip_size_kb": 0, 00:34:30.355 "state": "online", 00:34:30.355 "raid_level": "raid1", 00:34:30.355 "superblock": true, 00:34:30.356 "num_base_bdevs": 2, 00:34:30.356 "num_base_bdevs_discovered": 2, 00:34:30.356 "num_base_bdevs_operational": 2, 
00:34:30.356 "base_bdevs_list": [ 00:34:30.356 { 00:34:30.356 "name": "spare", 00:34:30.356 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:30.356 "is_configured": true, 00:34:30.356 "data_offset": 2048, 00:34:30.356 "data_size": 63488 00:34:30.356 }, 00:34:30.356 { 00:34:30.356 "name": "BaseBdev2", 00:34:30.356 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:30.356 "is_configured": true, 00:34:30.356 "data_offset": 2048, 00:34:30.356 "data_size": 63488 00:34:30.356 } 00:34:30.356 ] 00:34:30.356 }' 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.356 "name": "raid_bdev1", 00:34:30.356 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:30.356 "strip_size_kb": 0, 00:34:30.356 "state": "online", 00:34:30.356 "raid_level": "raid1", 00:34:30.356 "superblock": true, 00:34:30.356 "num_base_bdevs": 2, 00:34:30.356 "num_base_bdevs_discovered": 2, 00:34:30.356 "num_base_bdevs_operational": 2, 00:34:30.356 "base_bdevs_list": [ 00:34:30.356 { 00:34:30.356 "name": "spare", 00:34:30.356 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:30.356 "is_configured": true, 00:34:30.356 "data_offset": 2048, 00:34:30.356 "data_size": 63488 00:34:30.356 }, 00:34:30.356 { 00:34:30.356 "name": "BaseBdev2", 00:34:30.356 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:30.356 "is_configured": true, 00:34:30.356 "data_offset": 2048, 00:34:30.356 "data_size": 63488 00:34:30.356 } 00:34:30.356 ] 00:34:30.356 }' 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.356 14:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:30.923 
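An aside on the odd-looking `[[ none == \n\o\n\e ]]` and `[[ none == \r\e\b\u\i\l\d ]]` lines traced above: this is just how bash xtrace renders the comparison. A minimal sketch (variable name hypothetical, not from the test scripts):

```shell
# Inside [[ ]] an unquoted right-hand side is treated as a glob pattern,
# so under `set -x` bash prints it character-escaped (\r\e\b\u\i\l\d) to
# show the string is compared literally rather than as a pattern.
process_type=none
if [[ "$process_type" == rebuild ]]; then
  echo "rebuild in progress"
else
  echo "no rebuild running"
fi
```

With `process_type=none` the else branch is taken, which is exactly what the trace shows before the rebuild is kicked off.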
14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 [2024-10-09 14:03:37.204503] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:30.923 [2024-10-09 14:03:37.204542] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:30.923 00:34:30.923 Latency(us) 00:34:30.923 [2024-10-09T14:03:37.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:30.923 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:34:30.923 raid_bdev1 : 7.66 106.30 318.89 0.00 0.00 12544.46 278.92 113346.07 00:34:30.923 [2024-10-09T14:03:37.474Z] =================================================================================================================== 00:34:30.923 [2024-10-09T14:03:37.474Z] Total : 106.30 318.89 0.00 0.00 12544.46 278.92 113346.07 00:34:30.923 [2024-10-09 14:03:37.219990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:30.923 [2024-10-09 14:03:37.220178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:30.923 [2024-10-09 14:03:37.220297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:30.923 [2024-10-09 14:03:37.220413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:34:30.923 { 00:34:30.923 "results": [ 00:34:30.923 { 00:34:30.923 "job": "raid_bdev1", 00:34:30.923 "core_mask": "0x1", 00:34:30.923 "workload": "randrw", 00:34:30.923 "percentage": 50, 00:34:30.923 "status": "finished", 00:34:30.923 "queue_depth": 2, 00:34:30.923 "io_size": 3145728, 00:34:30.923 "runtime": 7.657926, 00:34:30.923 "iops": 106.29509869904723, 00:34:30.923 "mibps": 318.8852960971417, 00:34:30.923 "io_failed": 0, 00:34:30.923 "io_timeout": 0, 00:34:30.923 "avg_latency_us": 12544.464855504855, 00:34:30.923 "min_latency_us": 278.91809523809525, 00:34:30.923 "max_latency_us": 113346.07238095238 00:34:30.923 } 00:34:30.923 ], 00:34:30.923 "core_count": 1 00:34:30.923 } 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:34:30.923
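The perf summary above can be cross-checked with plain arithmetic, no SPDK needed: with a 3145728-byte (3 MiB) I/O size, throughput in MiB/s is simply IOPS times 3. A sketch using the numbers from the results JSON:

```shell
# Recompute "mibps" from "iops" and "io_size" in the results JSON above.
awk 'BEGIN {
  iops    = 106.29509869904723   # "iops" field
  io_size = 3145728              # "io_size" field, bytes per I/O (3 MiB)
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints: 318.89 MiB/s
```

This matches the 318.89 MiB/s in both the human-readable latency table and the "mibps" field.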
14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:30.923 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:34:31.182 /dev/nbd0 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:31.182 1+0 records in 00:34:31.182 1+0 records out 00:34:31.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350031 s, 11.7 MB/s 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:31.182 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 
00:34:31.441 /dev/nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:31.441 1+0 records in 00:34:31.441 1+0 records out 00:34:31.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043355 s, 9.4 MB/s 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.441 14:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.009 [2024-10-09 14:03:38.520151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:32.009 [2024-10-09 14:03:38.520213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:32.009 [2024-10-09 14:03:38.520235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:32.009 [2024-10-09 14:03:38.520250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:32.009 [2024-10-09 14:03:38.522841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:32.009 [2024-10-09 14:03:38.523000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:32.009 [2024-10-09 14:03:38.523102] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:32.009 [2024-10-09 14:03:38.523153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:32.009 [2024-10-09 14:03:38.523280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:32.009 spare 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:32.009 14:03:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.009 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.268 [2024-10-09 14:03:38.623363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:34:32.268 [2024-10-09 14:03:38.623387] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:34:32.268 [2024-10-09 14:03:38.623689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:34:32.268 [2024-10-09 14:03:38.623823] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:34:32.268 [2024-10-09 14:03:38.623844] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:34:32.268 [2024-10-09 14:03:38.623975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.268 "name": "raid_bdev1", 00:34:32.268 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:32.268 "strip_size_kb": 0, 00:34:32.268 "state": "online", 00:34:32.268 "raid_level": "raid1", 00:34:32.268 "superblock": true, 00:34:32.268 "num_base_bdevs": 2, 00:34:32.268 "num_base_bdevs_discovered": 2, 00:34:32.268 "num_base_bdevs_operational": 2, 00:34:32.268 "base_bdevs_list": [ 00:34:32.268 { 00:34:32.268 "name": "spare", 00:34:32.268 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:32.268 "is_configured": true, 00:34:32.268 "data_offset": 2048, 00:34:32.268 "data_size": 63488 00:34:32.268 }, 00:34:32.268 { 00:34:32.268 "name": "BaseBdev2", 00:34:32.268 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:32.268 "is_configured": true, 00:34:32.268 "data_offset": 2048, 00:34:32.268 "data_size": 63488 00:34:32.268 } 00:34:32.268 ] 00:34:32.268 }' 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.268 14:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.527 14:03:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.527 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:32.786 "name": "raid_bdev1", 00:34:32.786 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:32.786 "strip_size_kb": 0, 00:34:32.786 "state": "online", 00:34:32.786 "raid_level": "raid1", 00:34:32.786 "superblock": true, 00:34:32.786 "num_base_bdevs": 2, 00:34:32.786 "num_base_bdevs_discovered": 2, 00:34:32.786 "num_base_bdevs_operational": 2, 00:34:32.786 "base_bdevs_list": [ 00:34:32.786 { 00:34:32.786 "name": "spare", 00:34:32.786 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:32.786 "is_configured": true, 00:34:32.786 "data_offset": 2048, 00:34:32.786 "data_size": 63488 00:34:32.786 }, 00:34:32.786 { 00:34:32.786 "name": "BaseBdev2", 00:34:32.786 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:32.786 "is_configured": true, 00:34:32.786 
"data_offset": 2048, 00:34:32.786 "data_size": 63488 00:34:32.786 } 00:34:32.786 ] 00:34:32.786 }' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.786 [2024-10-09 14:03:39.232394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:32.786 "name": "raid_bdev1", 00:34:32.786 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:32.786 "strip_size_kb": 0, 00:34:32.786 "state": "online", 00:34:32.786 "raid_level": "raid1", 00:34:32.786 "superblock": true, 00:34:32.786 "num_base_bdevs": 2, 00:34:32.786 "num_base_bdevs_discovered": 1, 00:34:32.786 "num_base_bdevs_operational": 1, 00:34:32.786 "base_bdevs_list": [ 00:34:32.786 { 00:34:32.786 "name": 
null, 00:34:32.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.786 "is_configured": false, 00:34:32.786 "data_offset": 0, 00:34:32.786 "data_size": 63488 00:34:32.786 }, 00:34:32.786 { 00:34:32.786 "name": "BaseBdev2", 00:34:32.786 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:32.786 "is_configured": true, 00:34:32.786 "data_offset": 2048, 00:34:32.786 "data_size": 63488 00:34:32.786 } 00:34:32.786 ] 00:34:32.786 }' 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:32.786 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:33.354 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:33.354 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.354 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:33.354 [2024-10-09 14:03:39.696580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.354 [2024-10-09 14:03:39.696755] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:33.354 [2024-10-09 14:03:39.696772] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
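Once the rebuild starts, progress is reported as rebuilt blocks plus an integer percentage. A quick sanity check of those figures (values taken from the status JSON: 20480 blocks rebuilt out of a 63488-block data region):

```shell
# Integer percentage of rebuild progress, as blocks_done * 100 / data_size.
blocks=20480
data_size=63488
echo $(( blocks * 100 / data_size ))   # prints 32
```

20480 * 100 / 63488 truncates to 32, matching the "percent": 32 the RPC reports.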
00:34:33.354 [2024-10-09 14:03:39.696808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.354 [2024-10-09 14:03:39.701371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:34:33.354 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.354 14:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:33.354 [2024-10-09 14:03:39.703629] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:34.289 "name": "raid_bdev1", 00:34:34.289 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:34.289 "strip_size_kb": 0, 00:34:34.289 "state": "online", 
00:34:34.289 "raid_level": "raid1", 00:34:34.289 "superblock": true, 00:34:34.289 "num_base_bdevs": 2, 00:34:34.289 "num_base_bdevs_discovered": 2, 00:34:34.289 "num_base_bdevs_operational": 2, 00:34:34.289 "process": { 00:34:34.289 "type": "rebuild", 00:34:34.289 "target": "spare", 00:34:34.289 "progress": { 00:34:34.289 "blocks": 20480, 00:34:34.289 "percent": 32 00:34:34.289 } 00:34:34.289 }, 00:34:34.289 "base_bdevs_list": [ 00:34:34.289 { 00:34:34.289 "name": "spare", 00:34:34.289 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:34.289 "is_configured": true, 00:34:34.289 "data_offset": 2048, 00:34:34.289 "data_size": 63488 00:34:34.289 }, 00:34:34.289 { 00:34:34.289 "name": "BaseBdev2", 00:34:34.289 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:34.289 "is_configured": true, 00:34:34.289 "data_offset": 2048, 00:34:34.289 "data_size": 63488 00:34:34.289 } 00:34:34.289 ] 00:34:34.289 }' 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:34.289 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:34.547 [2024-10-09 14:03:40.854107] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.547 [2024-10-09 14:03:40.910058] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:34.547 [2024-10-09 
14:03:40.910275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.547 [2024-10-09 14:03:40.910372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.547 [2024-10-09 14:03:40.910411] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.547 "name": "raid_bdev1", 00:34:34.547 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:34.547 "strip_size_kb": 0, 00:34:34.547 "state": "online", 00:34:34.547 "raid_level": "raid1", 00:34:34.547 "superblock": true, 00:34:34.547 "num_base_bdevs": 2, 00:34:34.547 "num_base_bdevs_discovered": 1, 00:34:34.547 "num_base_bdevs_operational": 1, 00:34:34.547 "base_bdevs_list": [ 00:34:34.547 { 00:34:34.547 "name": null, 00:34:34.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:34.547 "is_configured": false, 00:34:34.547 "data_offset": 0, 00:34:34.547 "data_size": 63488 00:34:34.547 }, 00:34:34.547 { 00:34:34.547 "name": "BaseBdev2", 00:34:34.547 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:34.547 "is_configured": true, 00:34:34.547 "data_offset": 2048, 00:34:34.547 "data_size": 63488 00:34:34.547 } 00:34:34.547 ] 00:34:34.547 }' 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.547 14:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:35.113 14:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:35.113 14:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.113 14:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:35.113 [2024-10-09 14:03:41.375019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:35.113 [2024-10-09 14:03:41.375081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:35.113 [2024-10-09 14:03:41.375110] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:34:35.113 [2024-10-09 14:03:41.375121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:35.113 [2024-10-09 14:03:41.375575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:35.113 [2024-10-09 14:03:41.375596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:35.113 [2024-10-09 14:03:41.375681] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:35.113 [2024-10-09 14:03:41.375695] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:35.113 [2024-10-09 14:03:41.375708] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:35.113 [2024-10-09 14:03:41.375730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:35.113 [2024-10-09 14:03:41.379930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:34:35.113 spare 00:34:35.113 14:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.113 14:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:35.113 [2024-10-09 14:03:41.382295] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.046 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:36.046 "name": "raid_bdev1", 00:34:36.047 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:36.047 "strip_size_kb": 0, 00:34:36.047 "state": "online", 00:34:36.047 "raid_level": "raid1", 00:34:36.047 "superblock": true, 00:34:36.047 "num_base_bdevs": 2, 00:34:36.047 "num_base_bdevs_discovered": 2, 00:34:36.047 "num_base_bdevs_operational": 2, 00:34:36.047 "process": { 00:34:36.047 "type": "rebuild", 00:34:36.047 "target": "spare", 00:34:36.047 "progress": { 00:34:36.047 "blocks": 20480, 00:34:36.047 "percent": 32 00:34:36.047 } 00:34:36.047 }, 00:34:36.047 "base_bdevs_list": [ 00:34:36.047 { 00:34:36.047 "name": "spare", 00:34:36.047 "uuid": "316bd22f-33a2-5fe0-b382-68e909930a35", 00:34:36.047 "is_configured": true, 00:34:36.047 "data_offset": 2048, 00:34:36.047 "data_size": 63488 00:34:36.047 }, 00:34:36.047 { 00:34:36.047 "name": "BaseBdev2", 00:34:36.047 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:36.047 "is_configured": true, 00:34:36.047 "data_offset": 2048, 00:34:36.047 "data_size": 63488 00:34:36.047 } 00:34:36.047 ] 00:34:36.047 }' 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.047 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.047 [2024-10-09 14:03:42.536264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:36.047 [2024-10-09 14:03:42.588650] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:36.047 [2024-10-09 14:03:42.588716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:36.047 [2024-10-09 14:03:42.588734] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:36.047 [2024-10-09 14:03:42.588745] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:36.305 "name": "raid_bdev1", 00:34:36.305 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:36.305 "strip_size_kb": 0, 00:34:36.305 "state": "online", 00:34:36.305 "raid_level": "raid1", 00:34:36.305 "superblock": true, 00:34:36.305 "num_base_bdevs": 2, 00:34:36.305 "num_base_bdevs_discovered": 1, 00:34:36.305 "num_base_bdevs_operational": 1, 00:34:36.305 "base_bdevs_list": [ 00:34:36.305 { 00:34:36.305 "name": null, 00:34:36.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.305 "is_configured": false, 00:34:36.305 "data_offset": 0, 00:34:36.305 "data_size": 63488 00:34:36.305 }, 00:34:36.305 { 00:34:36.305 "name": "BaseBdev2", 00:34:36.305 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:36.305 "is_configured": true, 00:34:36.305 "data_offset": 2048, 00:34:36.305 "data_size": 63488 00:34:36.305 } 00:34:36.305 ] 00:34:36.305 }' 
00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:36.305 14:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:36.564 "name": "raid_bdev1", 00:34:36.564 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:36.564 "strip_size_kb": 0, 00:34:36.564 "state": "online", 00:34:36.564 "raid_level": "raid1", 00:34:36.564 "superblock": true, 00:34:36.564 "num_base_bdevs": 2, 00:34:36.564 "num_base_bdevs_discovered": 1, 00:34:36.564 "num_base_bdevs_operational": 1, 00:34:36.564 "base_bdevs_list": [ 00:34:36.564 { 00:34:36.564 "name": null, 00:34:36.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.564 "is_configured": false, 00:34:36.564 "data_offset": 0, 
00:34:36.564 "data_size": 63488 00:34:36.564 }, 00:34:36.564 { 00:34:36.564 "name": "BaseBdev2", 00:34:36.564 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:36.564 "is_configured": true, 00:34:36.564 "data_offset": 2048, 00:34:36.564 "data_size": 63488 00:34:36.564 } 00:34:36.564 ] 00:34:36.564 }' 00:34:36.564 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:36.823 [2024-10-09 14:03:43.201280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:36.823 [2024-10-09 14:03:43.201340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.823 [2024-10-09 14:03:43.201379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:36.823 [2024-10-09 14:03:43.201393] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:36.823 [2024-10-09 14:03:43.201848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.823 [2024-10-09 14:03:43.201872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:36.823 [2024-10-09 14:03:43.201941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:36.823 [2024-10-09 14:03:43.201959] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:36.823 [2024-10-09 14:03:43.201972] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:36.823 [2024-10-09 14:03:43.201986] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:36.823 BaseBdev1 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.823 14:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:37.758 "name": "raid_bdev1", 00:34:37.758 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:37.758 "strip_size_kb": 0, 00:34:37.758 "state": "online", 00:34:37.758 "raid_level": "raid1", 00:34:37.758 "superblock": true, 00:34:37.758 "num_base_bdevs": 2, 00:34:37.758 "num_base_bdevs_discovered": 1, 00:34:37.758 "num_base_bdevs_operational": 1, 00:34:37.758 "base_bdevs_list": [ 00:34:37.758 { 00:34:37.758 "name": null, 00:34:37.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:37.758 "is_configured": false, 00:34:37.758 "data_offset": 0, 00:34:37.758 "data_size": 63488 00:34:37.758 }, 00:34:37.758 { 00:34:37.758 "name": "BaseBdev2", 00:34:37.758 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:37.758 "is_configured": true, 00:34:37.758 "data_offset": 2048, 00:34:37.758 "data_size": 63488 00:34:37.758 } 00:34:37.758 ] 00:34:37.758 }' 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:37.758 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:38.323 "name": "raid_bdev1", 00:34:38.323 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:38.323 "strip_size_kb": 0, 00:34:38.323 "state": "online", 00:34:38.323 "raid_level": "raid1", 00:34:38.323 "superblock": true, 00:34:38.323 "num_base_bdevs": 2, 00:34:38.323 "num_base_bdevs_discovered": 1, 00:34:38.323 "num_base_bdevs_operational": 1, 00:34:38.323 "base_bdevs_list": [ 00:34:38.323 { 00:34:38.323 "name": null, 00:34:38.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.323 "is_configured": false, 00:34:38.323 "data_offset": 0, 00:34:38.323 "data_size": 63488 00:34:38.323 }, 00:34:38.323 { 00:34:38.323 "name": "BaseBdev2", 00:34:38.323 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:38.323 "is_configured": true, 
00:34:38.323 "data_offset": 2048, 00:34:38.323 "data_size": 63488 00:34:38.323 } 00:34:38.323 ] 00:34:38.323 }' 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:38.323 [2024-10-09 14:03:44.805897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:38.323 [2024-10-09 14:03:44.806194] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:38.323 [2024-10-09 14:03:44.806333] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:38.323 request: 00:34:38.323 { 00:34:38.323 "base_bdev": "BaseBdev1", 00:34:38.323 "raid_bdev": "raid_bdev1", 00:34:38.323 "method": "bdev_raid_add_base_bdev", 00:34:38.323 "req_id": 1 00:34:38.323 } 00:34:38.323 Got JSON-RPC error response 00:34:38.323 response: 00:34:38.323 { 00:34:38.323 "code": -22, 00:34:38.323 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:38.323 } 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:38.323 14:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:39.706 "name": "raid_bdev1", 00:34:39.706 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:39.706 "strip_size_kb": 0, 00:34:39.706 "state": "online", 00:34:39.706 "raid_level": "raid1", 00:34:39.706 "superblock": true, 00:34:39.706 "num_base_bdevs": 2, 00:34:39.706 "num_base_bdevs_discovered": 1, 00:34:39.706 "num_base_bdevs_operational": 1, 00:34:39.706 "base_bdevs_list": [ 00:34:39.706 { 00:34:39.706 "name": null, 00:34:39.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.706 "is_configured": false, 00:34:39.706 "data_offset": 0, 00:34:39.706 "data_size": 63488 00:34:39.706 }, 00:34:39.706 { 00:34:39.706 "name": "BaseBdev2", 00:34:39.706 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:39.706 "is_configured": true, 00:34:39.706 "data_offset": 2048, 00:34:39.706 "data_size": 63488 00:34:39.706 } 00:34:39.706 ] 00:34:39.706 }' 
00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:39.706 14:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:39.964 "name": "raid_bdev1", 00:34:39.964 "uuid": "ca656d2c-a321-4cb5-b2a9-887482322842", 00:34:39.964 "strip_size_kb": 0, 00:34:39.964 "state": "online", 00:34:39.964 "raid_level": "raid1", 00:34:39.964 "superblock": true, 00:34:39.964 "num_base_bdevs": 2, 00:34:39.964 "num_base_bdevs_discovered": 1, 00:34:39.964 "num_base_bdevs_operational": 1, 00:34:39.964 "base_bdevs_list": [ 00:34:39.964 { 00:34:39.964 "name": null, 00:34:39.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:39.964 "is_configured": false, 00:34:39.964 "data_offset": 0, 
00:34:39.964 "data_size": 63488 00:34:39.964 }, 00:34:39.964 { 00:34:39.964 "name": "BaseBdev2", 00:34:39.964 "uuid": "17872983-f700-5adc-98cb-7b0112c923e4", 00:34:39.964 "is_configured": true, 00:34:39.964 "data_offset": 2048, 00:34:39.964 "data_size": 63488 00:34:39.964 } 00:34:39.964 ] 00:34:39.964 }' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87906 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87906 ']' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87906 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87906 00:34:39.964 killing process with pid 87906 00:34:39.964 Received shutdown signal, test time was about 16.885686 seconds 00:34:39.964 00:34:39.964 Latency(us) 00:34:39.964 [2024-10-09T14:03:46.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:39.964 [2024-10-09T14:03:46.515Z] =================================================================================================================== 00:34:39.964 [2024-10-09T14:03:46.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87906' 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87906 00:34:39.964 [2024-10-09 14:03:46.444236] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:39.964 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87906 00:34:39.964 [2024-10-09 14:03:46.444377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:39.964 [2024-10-09 14:03:46.444445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:39.964 [2024-10-09 14:03:46.444458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:34:39.964 [2024-10-09 14:03:46.471474] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:40.223 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:34:40.223 00:34:40.223 real 0m18.979s 00:34:40.223 user 0m25.451s 00:34:40.223 sys 0m2.349s 00:34:40.223 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:40.223 ************************************ 00:34:40.223 END TEST raid_rebuild_test_sb_io 00:34:40.223 ************************************ 00:34:40.223 14:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:34:40.482 14:03:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:34:40.482 14:03:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:34:40.482 14:03:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 
00:34:40.482 14:03:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:40.482 14:03:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:40.482 ************************************ 00:34:40.482 START TEST raid_rebuild_test 00:34:40.482 ************************************ 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88578 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88578 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88578 ']' 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.482 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.482 14:03:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.482 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:40.482 Zero copy mechanism will not be used. 00:34:40.482 [2024-10-09 14:03:46.875772] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:40.483 [2024-10-09 14:03:46.875904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88578 ] 00:34:40.742 [2024-10-09 14:03:47.033130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.742 [2024-10-09 14:03:47.080189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:40.742 [2024-10-09 14:03:47.124685] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:40.742 [2024-10-09 14:03:47.124896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.742 BaseBdev1_malloc 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.742 [2024-10-09 14:03:47.193992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:40.742 [2024-10-09 14:03:47.194057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.742 [2024-10-09 14:03:47.194083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:40.742 [2024-10-09 14:03:47.194100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.742 [2024-10-09 14:03:47.196622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.742 [2024-10-09 14:03:47.196661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:40.742 BaseBdev1 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:34:40.742 BaseBdev2_malloc 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.742 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.742 [2024-10-09 14:03:47.227706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:40.743 [2024-10-09 14:03:47.227776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.743 [2024-10-09 14:03:47.227808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:40.743 [2024-10-09 14:03:47.227825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.743 [2024-10-09 14:03:47.230843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.743 [2024-10-09 14:03:47.230879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:40.743 BaseBdev2 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 BaseBdev3_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 [2024-10-09 14:03:47.248917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:40.743 [2024-10-09 14:03:47.248965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.743 [2024-10-09 14:03:47.248992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:40.743 [2024-10-09 14:03:47.249003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.743 [2024-10-09 14:03:47.251486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.743 [2024-10-09 14:03:47.251524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:40.743 BaseBdev3 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 BaseBdev4_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.743 [2024-10-09 14:03:47.270179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:40.743 [2024-10-09 14:03:47.270344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.743 [2024-10-09 14:03:47.270381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:40.743 [2024-10-09 14:03:47.270393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.743 [2024-10-09 14:03:47.272833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.743 [2024-10-09 14:03:47.272869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:40.743 BaseBdev4 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 spare_malloc 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.743 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.002 spare_delay 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:41.002 
14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.002 [2024-10-09 14:03:47.303448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:41.002 [2024-10-09 14:03:47.303502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:41.002 [2024-10-09 14:03:47.303528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:41.002 [2024-10-09 14:03:47.303539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:41.002 [2024-10-09 14:03:47.305969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:41.002 [2024-10-09 14:03:47.306007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:41.002 spare 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.002 [2024-10-09 14:03:47.315547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:41.002 [2024-10-09 14:03:47.317748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:41.002 [2024-10-09 14:03:47.317819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:41.002 [2024-10-09 14:03:47.317860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:41.002 [2024-10-09 14:03:47.317938] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000006280 00:34:41.002 [2024-10-09 14:03:47.317949] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:34:41.002 [2024-10-09 14:03:47.318218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:41.002 [2024-10-09 14:03:47.318367] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:34:41.002 [2024-10-09 14:03:47.318381] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:34:41.002 [2024-10-09 14:03:47.318509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:41.002 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.003 14:03:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:41.003 "name": "raid_bdev1", 00:34:41.003 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:41.003 "strip_size_kb": 0, 00:34:41.003 "state": "online", 00:34:41.003 "raid_level": "raid1", 00:34:41.003 "superblock": false, 00:34:41.003 "num_base_bdevs": 4, 00:34:41.003 "num_base_bdevs_discovered": 4, 00:34:41.003 "num_base_bdevs_operational": 4, 00:34:41.003 "base_bdevs_list": [ 00:34:41.003 { 00:34:41.003 "name": "BaseBdev1", 00:34:41.003 "uuid": "0a7afe8c-8f52-5711-aa74-c40c06c633be", 00:34:41.003 "is_configured": true, 00:34:41.003 "data_offset": 0, 00:34:41.003 "data_size": 65536 00:34:41.003 }, 00:34:41.003 { 00:34:41.003 "name": "BaseBdev2", 00:34:41.003 "uuid": "3e803848-ef41-5825-86c8-23614350a664", 00:34:41.003 "is_configured": true, 00:34:41.003 "data_offset": 0, 00:34:41.003 "data_size": 65536 00:34:41.003 }, 00:34:41.003 { 00:34:41.003 "name": "BaseBdev3", 00:34:41.003 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:41.003 "is_configured": true, 00:34:41.003 "data_offset": 0, 00:34:41.003 "data_size": 65536 00:34:41.003 }, 00:34:41.003 { 00:34:41.003 "name": "BaseBdev4", 00:34:41.003 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:41.003 "is_configured": true, 00:34:41.003 "data_offset": 0, 00:34:41.003 "data_size": 65536 00:34:41.003 } 00:34:41.003 ] 00:34:41.003 }' 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:41.003 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:34:41.261 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:41.261 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:41.261 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.261 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.261 [2024-10-09 14:03:47.779928] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:41.520 14:03:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:41.780 [2024-10-09 14:03:48.155771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:41.780 /dev/nbd0 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:41.780 14:03:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:41.780 1+0 records in 00:34:41.780 1+0 records out 00:34:41.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619713 s, 6.6 MB/s 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:34:41.780 14:03:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:34:48.344 65536+0 records in 00:34:48.344 65536+0 records out 00:34:48.344 33554432 bytes (34 MB, 32 MiB) copied, 6.34621 s, 5.3 MB/s 00:34:48.344 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:48.344 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:48.344 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:48.345 
14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:48.345 [2024-10-09 14:03:54.816581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.345 [2024-10-09 14:03:54.848636] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.345 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.604 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:48.604 "name": "raid_bdev1", 00:34:48.604 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:48.604 "strip_size_kb": 0, 00:34:48.604 "state": "online", 00:34:48.604 "raid_level": "raid1", 00:34:48.604 "superblock": false, 00:34:48.604 "num_base_bdevs": 4, 00:34:48.604 "num_base_bdevs_discovered": 3, 00:34:48.604 "num_base_bdevs_operational": 3, 00:34:48.604 "base_bdevs_list": [ 00:34:48.604 { 00:34:48.604 "name": null, 00:34:48.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.604 
"is_configured": false, 00:34:48.604 "data_offset": 0, 00:34:48.604 "data_size": 65536 00:34:48.604 }, 00:34:48.604 { 00:34:48.604 "name": "BaseBdev2", 00:34:48.604 "uuid": "3e803848-ef41-5825-86c8-23614350a664", 00:34:48.604 "is_configured": true, 00:34:48.604 "data_offset": 0, 00:34:48.604 "data_size": 65536 00:34:48.604 }, 00:34:48.604 { 00:34:48.604 "name": "BaseBdev3", 00:34:48.604 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:48.604 "is_configured": true, 00:34:48.604 "data_offset": 0, 00:34:48.604 "data_size": 65536 00:34:48.604 }, 00:34:48.604 { 00:34:48.604 "name": "BaseBdev4", 00:34:48.604 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:48.604 "is_configured": true, 00:34:48.604 "data_offset": 0, 00:34:48.604 "data_size": 65536 00:34:48.604 } 00:34:48.604 ] 00:34:48.604 }' 00:34:48.604 14:03:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:48.604 14:03:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.863 14:03:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:48.863 14:03:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.863 14:03:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.863 [2024-10-09 14:03:55.252775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.863 [2024-10-09 14:03:55.256242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:34:48.863 [2024-10-09 14:03:55.258510] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:48.863 14:03:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.863 14:03:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:49.834 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.835 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:49.835 "name": "raid_bdev1", 00:34:49.835 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:49.835 "strip_size_kb": 0, 00:34:49.835 "state": "online", 00:34:49.835 "raid_level": "raid1", 00:34:49.835 "superblock": false, 00:34:49.835 "num_base_bdevs": 4, 00:34:49.835 "num_base_bdevs_discovered": 4, 00:34:49.835 "num_base_bdevs_operational": 4, 00:34:49.835 "process": { 00:34:49.835 "type": "rebuild", 00:34:49.835 "target": "spare", 00:34:49.835 "progress": { 00:34:49.835 "blocks": 20480, 00:34:49.835 "percent": 31 00:34:49.835 } 00:34:49.835 }, 00:34:49.835 "base_bdevs_list": [ 00:34:49.835 { 00:34:49.835 "name": "spare", 00:34:49.835 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:49.835 "is_configured": true, 00:34:49.835 "data_offset": 0, 00:34:49.835 "data_size": 65536 00:34:49.835 }, 00:34:49.835 { 00:34:49.835 "name": "BaseBdev2", 00:34:49.835 "uuid": 
"3e803848-ef41-5825-86c8-23614350a664", 00:34:49.835 "is_configured": true, 00:34:49.835 "data_offset": 0, 00:34:49.835 "data_size": 65536 00:34:49.835 }, 00:34:49.835 { 00:34:49.835 "name": "BaseBdev3", 00:34:49.835 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:49.835 "is_configured": true, 00:34:49.835 "data_offset": 0, 00:34:49.835 "data_size": 65536 00:34:49.835 }, 00:34:49.835 { 00:34:49.835 "name": "BaseBdev4", 00:34:49.835 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:49.835 "is_configured": true, 00:34:49.835 "data_offset": 0, 00:34:49.835 "data_size": 65536 00:34:49.835 } 00:34:49.835 ] 00:34:49.835 }' 00:34:49.835 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.094 [2024-10-09 14:03:56.415862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:50.094 [2024-10-09 14:03:56.466062] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:50.094 [2024-10-09 14:03:56.466124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:50.094 [2024-10-09 14:03:56.466145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:50.094 [2024-10-09 14:03:56.466154] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:50.094 "name": "raid_bdev1", 00:34:50.094 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:50.094 "strip_size_kb": 0, 00:34:50.094 "state": "online", 
00:34:50.094 "raid_level": "raid1", 00:34:50.094 "superblock": false, 00:34:50.094 "num_base_bdevs": 4, 00:34:50.094 "num_base_bdevs_discovered": 3, 00:34:50.094 "num_base_bdevs_operational": 3, 00:34:50.094 "base_bdevs_list": [ 00:34:50.094 { 00:34:50.094 "name": null, 00:34:50.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.094 "is_configured": false, 00:34:50.094 "data_offset": 0, 00:34:50.094 "data_size": 65536 00:34:50.094 }, 00:34:50.094 { 00:34:50.094 "name": "BaseBdev2", 00:34:50.094 "uuid": "3e803848-ef41-5825-86c8-23614350a664", 00:34:50.094 "is_configured": true, 00:34:50.094 "data_offset": 0, 00:34:50.094 "data_size": 65536 00:34:50.094 }, 00:34:50.094 { 00:34:50.094 "name": "BaseBdev3", 00:34:50.094 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:50.094 "is_configured": true, 00:34:50.094 "data_offset": 0, 00:34:50.094 "data_size": 65536 00:34:50.094 }, 00:34:50.094 { 00:34:50.094 "name": "BaseBdev4", 00:34:50.094 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:50.094 "is_configured": true, 00:34:50.094 "data_offset": 0, 00:34:50.094 "data_size": 65536 00:34:50.094 } 00:34:50.094 ] 00:34:50.094 }' 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:50.094 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.661 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:50.661 "name": "raid_bdev1", 00:34:50.661 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:50.661 "strip_size_kb": 0, 00:34:50.661 "state": "online", 00:34:50.661 "raid_level": "raid1", 00:34:50.661 "superblock": false, 00:34:50.662 "num_base_bdevs": 4, 00:34:50.662 "num_base_bdevs_discovered": 3, 00:34:50.662 "num_base_bdevs_operational": 3, 00:34:50.662 "base_bdevs_list": [ 00:34:50.662 { 00:34:50.662 "name": null, 00:34:50.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.662 "is_configured": false, 00:34:50.662 "data_offset": 0, 00:34:50.662 "data_size": 65536 00:34:50.662 }, 00:34:50.662 { 00:34:50.662 "name": "BaseBdev2", 00:34:50.662 "uuid": "3e803848-ef41-5825-86c8-23614350a664", 00:34:50.662 "is_configured": true, 00:34:50.662 "data_offset": 0, 00:34:50.662 "data_size": 65536 00:34:50.662 }, 00:34:50.662 { 00:34:50.662 "name": "BaseBdev3", 00:34:50.662 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:50.662 "is_configured": true, 00:34:50.662 "data_offset": 0, 00:34:50.662 "data_size": 65536 00:34:50.662 }, 00:34:50.662 { 00:34:50.662 "name": "BaseBdev4", 00:34:50.662 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:50.662 "is_configured": true, 00:34:50.662 "data_offset": 0, 00:34:50.662 "data_size": 65536 00:34:50.662 } 00:34:50.662 ] 00:34:50.662 }' 00:34:50.662 14:03:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.662 [2024-10-09 14:03:57.062490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:50.662 [2024-10-09 14:03:57.065842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:34:50.662 [2024-10-09 14:03:57.068213] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:50.662 14:03:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.598 14:03:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:51.598 "name": "raid_bdev1", 00:34:51.598 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:51.598 "strip_size_kb": 0, 00:34:51.598 "state": "online", 00:34:51.598 "raid_level": "raid1", 00:34:51.598 "superblock": false, 00:34:51.598 "num_base_bdevs": 4, 00:34:51.598 "num_base_bdevs_discovered": 4, 00:34:51.598 "num_base_bdevs_operational": 4, 00:34:51.598 "process": { 00:34:51.598 "type": "rebuild", 00:34:51.598 "target": "spare", 00:34:51.598 "progress": { 00:34:51.598 "blocks": 20480, 00:34:51.598 "percent": 31 00:34:51.598 } 00:34:51.598 }, 00:34:51.598 "base_bdevs_list": [ 00:34:51.598 { 00:34:51.598 "name": "spare", 00:34:51.598 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:51.598 "is_configured": true, 00:34:51.598 "data_offset": 0, 00:34:51.598 "data_size": 65536 00:34:51.598 }, 00:34:51.598 { 00:34:51.598 "name": "BaseBdev2", 00:34:51.598 "uuid": "3e803848-ef41-5825-86c8-23614350a664", 00:34:51.598 "is_configured": true, 00:34:51.598 "data_offset": 0, 00:34:51.598 "data_size": 65536 00:34:51.598 }, 00:34:51.598 { 00:34:51.598 "name": "BaseBdev3", 00:34:51.598 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:51.598 "is_configured": true, 00:34:51.598 "data_offset": 0, 00:34:51.598 "data_size": 65536 00:34:51.598 }, 00:34:51.598 { 00:34:51.598 "name": "BaseBdev4", 00:34:51.598 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:51.598 "is_configured": true, 00:34:51.598 "data_offset": 0, 00:34:51.598 "data_size": 65536 00:34:51.598 } 00:34:51.598 ] 00:34:51.598 }' 00:34:51.598 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.857 [2024-10-09 14:03:58.209155] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:51.857 [2024-10-09 14:03:58.274547] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:51.857 
14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:51.857 "name": "raid_bdev1", 00:34:51.857 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:51.857 "strip_size_kb": 0, 00:34:51.857 "state": "online", 00:34:51.857 "raid_level": "raid1", 00:34:51.857 "superblock": false, 00:34:51.857 "num_base_bdevs": 4, 00:34:51.857 "num_base_bdevs_discovered": 3, 00:34:51.857 "num_base_bdevs_operational": 3, 00:34:51.857 "process": { 00:34:51.857 "type": "rebuild", 00:34:51.857 "target": "spare", 00:34:51.857 "progress": { 00:34:51.857 "blocks": 24576, 00:34:51.857 "percent": 37 00:34:51.857 } 00:34:51.857 }, 00:34:51.857 "base_bdevs_list": [ 00:34:51.857 { 00:34:51.857 "name": "spare", 00:34:51.857 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:51.857 "is_configured": true, 00:34:51.857 "data_offset": 0, 00:34:51.857 "data_size": 65536 00:34:51.857 }, 00:34:51.857 { 00:34:51.857 "name": null, 00:34:51.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.857 "is_configured": false, 00:34:51.857 "data_offset": 0, 00:34:51.857 "data_size": 65536 00:34:51.857 }, 00:34:51.857 { 00:34:51.857 "name": "BaseBdev3", 00:34:51.857 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:51.857 "is_configured": true, 
00:34:51.857 "data_offset": 0, 00:34:51.857 "data_size": 65536 00:34:51.857 }, 00:34:51.857 { 00:34:51.857 "name": "BaseBdev4", 00:34:51.857 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:51.857 "is_configured": true, 00:34:51.857 "data_offset": 0, 00:34:51.857 "data_size": 65536 00:34:51.857 } 00:34:51.857 ] 00:34:51.857 }' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:51.857 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=372 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:52.116 14:03:58 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.116 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:52.116 "name": "raid_bdev1", 00:34:52.116 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:52.116 "strip_size_kb": 0, 00:34:52.116 "state": "online", 00:34:52.116 "raid_level": "raid1", 00:34:52.116 "superblock": false, 00:34:52.116 "num_base_bdevs": 4, 00:34:52.116 "num_base_bdevs_discovered": 3, 00:34:52.116 "num_base_bdevs_operational": 3, 00:34:52.116 "process": { 00:34:52.116 "type": "rebuild", 00:34:52.116 "target": "spare", 00:34:52.116 "progress": { 00:34:52.116 "blocks": 26624, 00:34:52.116 "percent": 40 00:34:52.116 } 00:34:52.116 }, 00:34:52.116 "base_bdevs_list": [ 00:34:52.116 { 00:34:52.116 "name": "spare", 00:34:52.116 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:52.116 "is_configured": true, 00:34:52.116 "data_offset": 0, 00:34:52.116 "data_size": 65536 00:34:52.116 }, 00:34:52.116 { 00:34:52.116 "name": null, 00:34:52.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.116 "is_configured": false, 00:34:52.116 "data_offset": 0, 00:34:52.116 "data_size": 65536 00:34:52.116 }, 00:34:52.116 { 00:34:52.116 "name": "BaseBdev3", 00:34:52.117 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:52.117 "is_configured": true, 00:34:52.117 "data_offset": 0, 00:34:52.117 "data_size": 65536 00:34:52.117 }, 00:34:52.117 { 00:34:52.117 "name": "BaseBdev4", 00:34:52.117 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:52.117 "is_configured": true, 00:34:52.117 "data_offset": 0, 00:34:52.117 "data_size": 65536 00:34:52.117 } 00:34:52.117 ] 00:34:52.117 }' 00:34:52.117 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:52.117 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:52.117 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:34:52.117 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:52.117 14:03:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:53.052 14:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.053 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.053 14:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 14:03:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:53.311 "name": "raid_bdev1", 00:34:53.311 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:53.311 "strip_size_kb": 0, 00:34:53.311 "state": "online", 00:34:53.311 "raid_level": "raid1", 00:34:53.311 "superblock": false, 00:34:53.311 "num_base_bdevs": 4, 00:34:53.311 "num_base_bdevs_discovered": 3, 00:34:53.311 "num_base_bdevs_operational": 3, 00:34:53.311 "process": { 00:34:53.311 "type": "rebuild", 00:34:53.311 "target": "spare", 00:34:53.311 "progress": { 00:34:53.311 
"blocks": 49152, 00:34:53.311 "percent": 75 00:34:53.311 } 00:34:53.311 }, 00:34:53.311 "base_bdevs_list": [ 00:34:53.311 { 00:34:53.311 "name": "spare", 00:34:53.311 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:53.311 "is_configured": true, 00:34:53.311 "data_offset": 0, 00:34:53.311 "data_size": 65536 00:34:53.311 }, 00:34:53.311 { 00:34:53.311 "name": null, 00:34:53.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.311 "is_configured": false, 00:34:53.311 "data_offset": 0, 00:34:53.311 "data_size": 65536 00:34:53.311 }, 00:34:53.311 { 00:34:53.311 "name": "BaseBdev3", 00:34:53.311 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:53.311 "is_configured": true, 00:34:53.311 "data_offset": 0, 00:34:53.311 "data_size": 65536 00:34:53.311 }, 00:34:53.311 { 00:34:53.311 "name": "BaseBdev4", 00:34:53.311 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:53.311 "is_configured": true, 00:34:53.311 "data_offset": 0, 00:34:53.311 "data_size": 65536 00:34:53.311 } 00:34:53.311 ] 00:34:53.311 }' 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:53.311 14:03:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:53.877 [2024-10-09 14:04:00.285876] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:53.877 [2024-10-09 14:04:00.286096] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:53.877 [2024-10-09 14:04:00.286153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:54.443 "name": "raid_bdev1", 00:34:54.443 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:54.443 "strip_size_kb": 0, 00:34:54.443 "state": "online", 00:34:54.443 "raid_level": "raid1", 00:34:54.443 "superblock": false, 00:34:54.443 "num_base_bdevs": 4, 00:34:54.443 "num_base_bdevs_discovered": 3, 00:34:54.443 "num_base_bdevs_operational": 3, 00:34:54.443 "base_bdevs_list": [ 00:34:54.443 { 00:34:54.443 "name": "spare", 00:34:54.443 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": null, 00:34:54.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.443 "is_configured": false, 00:34:54.443 
"data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": "BaseBdev3", 00:34:54.443 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": "BaseBdev4", 00:34:54.443 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 } 00:34:54.443 ] 00:34:54.443 }' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.443 14:04:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:54.443 "name": "raid_bdev1", 00:34:54.443 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:54.443 "strip_size_kb": 0, 00:34:54.443 "state": "online", 00:34:54.443 "raid_level": "raid1", 00:34:54.443 "superblock": false, 00:34:54.443 "num_base_bdevs": 4, 00:34:54.443 "num_base_bdevs_discovered": 3, 00:34:54.443 "num_base_bdevs_operational": 3, 00:34:54.443 "base_bdevs_list": [ 00:34:54.443 { 00:34:54.443 "name": "spare", 00:34:54.443 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": null, 00:34:54.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.443 "is_configured": false, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": "BaseBdev3", 00:34:54.443 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 }, 00:34:54.443 { 00:34:54.443 "name": "BaseBdev4", 00:34:54.443 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:54.443 "is_configured": true, 00:34:54.443 "data_offset": 0, 00:34:54.443 "data_size": 65536 00:34:54.443 } 00:34:54.443 ] 00:34:54.443 }' 00:34:54.443 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.444 14:04:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.701 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.701 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.701 "name": "raid_bdev1", 00:34:54.701 "uuid": "423e8079-6cb3-4619-aeaf-6ac35786cae4", 00:34:54.701 "strip_size_kb": 0, 00:34:54.701 "state": "online", 00:34:54.701 "raid_level": "raid1", 00:34:54.701 "superblock": false, 00:34:54.701 "num_base_bdevs": 4, 00:34:54.701 
"num_base_bdevs_discovered": 3, 00:34:54.701 "num_base_bdevs_operational": 3, 00:34:54.701 "base_bdevs_list": [ 00:34:54.701 { 00:34:54.701 "name": "spare", 00:34:54.701 "uuid": "c8a51904-ed0a-5471-b7d3-cf094de9c05d", 00:34:54.701 "is_configured": true, 00:34:54.701 "data_offset": 0, 00:34:54.701 "data_size": 65536 00:34:54.701 }, 00:34:54.701 { 00:34:54.701 "name": null, 00:34:54.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.702 "is_configured": false, 00:34:54.702 "data_offset": 0, 00:34:54.702 "data_size": 65536 00:34:54.702 }, 00:34:54.702 { 00:34:54.702 "name": "BaseBdev3", 00:34:54.702 "uuid": "26b41d59-0417-5496-b4e1-b1c402aae41b", 00:34:54.702 "is_configured": true, 00:34:54.702 "data_offset": 0, 00:34:54.702 "data_size": 65536 00:34:54.702 }, 00:34:54.702 { 00:34:54.702 "name": "BaseBdev4", 00:34:54.702 "uuid": "284e55be-43db-5344-9cfb-0ab775c95046", 00:34:54.702 "is_configured": true, 00:34:54.702 "data_offset": 0, 00:34:54.702 "data_size": 65536 00:34:54.702 } 00:34:54.702 ] 00:34:54.702 }' 00:34:54.702 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.702 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.961 [2024-10-09 14:04:01.434203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:54.961 [2024-10-09 14:04:01.434352] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:54.961 [2024-10-09 14:04:01.434522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:54.961 [2024-10-09 14:04:01.434659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:34:54.961 [2024-10-09 14:04:01.434845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:54.961 14:04:01 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:54.961 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:55.219 /dev/nbd0 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:55.478 1+0 records in 00:34:55.478 1+0 records out 00:34:55.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228541 s, 17.9 MB/s 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:55.478 14:04:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:55.737 /dev/nbd1 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:55.737 1+0 records in 00:34:55.737 1+0 records out 00:34:55.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299849 s, 13.7 MB/s 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:55.737 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:55.995 14:04:02 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:55.995 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:55.996 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:55.996 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:55.996 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88578 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88578 ']' 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88578 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # 
uname 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88578 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:56.254 killing process with pid 88578 00:34:56.254 Received shutdown signal, test time was about 60.000000 seconds 00:34:56.254 00:34:56.254 Latency(us) 00:34:56.254 [2024-10-09T14:04:02.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.254 [2024-10-09T14:04:02.805Z] =================================================================================================================== 00:34:56.254 [2024-10-09T14:04:02.805Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88578' 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88578 00:34:56.254 [2024-10-09 14:04:02.669698] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:56.254 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88578 00:34:56.254 [2024-10-09 14:04:02.720535] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:56.513 14:04:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:34:56.513 00:34:56.513 real 0m16.171s 00:34:56.513 user 0m18.214s 00:34:56.513 sys 0m3.624s 00:34:56.513 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:56.513 14:04:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.513 ************************************ 00:34:56.513 END TEST raid_rebuild_test 
00:34:56.513 ************************************ 00:34:56.513 14:04:03 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:34:56.513 14:04:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:34:56.513 14:04:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:56.513 14:04:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:56.513 ************************************ 00:34:56.513 START TEST raid_rebuild_test_sb 00:34:56.513 ************************************ 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=89012 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 89012 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 89012 ']' 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.513 14:04:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:56.772 [2024-10-09 14:04:03.106890] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:34:56.772 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:56.772 Zero copy mechanism will not be used. 
00:34:56.772 [2024-10-09 14:04:03.107042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89012 ] 00:34:56.772 [2024-10-09 14:04:03.265238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.772 [2024-10-09 14:04:03.310976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.030 [2024-10-09 14:04:03.355390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:57.030 [2024-10-09 14:04:03.355426] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.598 BaseBdev1_malloc 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.598 [2024-10-09 14:04:04.084358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:34:57.598 [2024-10-09 14:04:04.084422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.598 [2024-10-09 14:04:04.084457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:57.598 [2024-10-09 14:04:04.084476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.598 [2024-10-09 14:04:04.087031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.598 [2024-10-09 14:04:04.087071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:57.598 BaseBdev1 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.598 BaseBdev2_malloc 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.598 [2024-10-09 14:04:04.123740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:57.598 [2024-10-09 14:04:04.123793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.598 [2024-10-09 14:04:04.123817] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:57.598 [2024-10-09 14:04:04.123829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.598 [2024-10-09 14:04:04.126292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.598 [2024-10-09 14:04:04.126331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:57.598 BaseBdev2 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.598 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 BaseBdev3_malloc 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 [2024-10-09 14:04:04.153136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:57.859 [2024-10-09 14:04:04.153188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.859 [2024-10-09 14:04:04.153217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:57.859 [2024-10-09 14:04:04.153228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:34:57.859 [2024-10-09 14:04:04.155843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.859 [2024-10-09 14:04:04.155880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:57.859 BaseBdev3 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 BaseBdev4_malloc 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 [2024-10-09 14:04:04.182407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:57.859 [2024-10-09 14:04:04.182468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.859 [2024-10-09 14:04:04.182496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:57.859 [2024-10-09 14:04:04.182508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.859 [2024-10-09 14:04:04.184996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.859 [2024-10-09 14:04:04.185033] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:57.859 BaseBdev4 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 spare_malloc 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 spare_delay 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:57.859 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 [2024-10-09 14:04:04.223684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:57.859 [2024-10-09 14:04:04.223738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.859 [2024-10-09 14:04:04.223779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:57.859 [2024-10-09 14:04:04.223790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:34:57.859 [2024-10-09 14:04:04.226265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:34:57.859 [2024-10-09 14:04:04.226423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:34:57.859 spare
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:34:57.860 [2024-10-09 14:04:04.235779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:34:57.860 [2024-10-09 14:04:04.238101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:34:57.860 [2024-10-09 14:04:04.238174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:34:57.860 [2024-10-09 14:04:04.238216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:34:57.860 [2024-10-09 14:04:04.238379] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:34:57.860 [2024-10-09 14:04:04.238392] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:34:57.860 [2024-10-09 14:04:04.238686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:34:57.860 [2024-10-09 14:04:04.238826] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:34:57.860 [2024-10-09 14:04:04.238842] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:34:57.860 [2024-10-09 14:04:04.238968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:34:57.860 "name": "raid_bdev1",
00:34:57.860 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:34:57.860 "strip_size_kb": 0,
00:34:57.860 "state": "online",
00:34:57.860 "raid_level": "raid1",
00:34:57.860 "superblock": true,
00:34:57.860 "num_base_bdevs": 4,
00:34:57.860 "num_base_bdevs_discovered": 4,
00:34:57.860 "num_base_bdevs_operational": 4,
00:34:57.860 "base_bdevs_list": [
00:34:57.860 {
00:34:57.860 "name": "BaseBdev1",
00:34:57.860 "uuid": "1d1dda2b-9563-5936-b238-0b7d6c979e6b",
00:34:57.860 "is_configured": true,
00:34:57.860 "data_offset": 2048,
00:34:57.860 "data_size": 63488
00:34:57.860 },
00:34:57.860 {
00:34:57.860 "name": "BaseBdev2",
00:34:57.860 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:34:57.860 "is_configured": true,
00:34:57.860 "data_offset": 2048,
00:34:57.860 "data_size": 63488
00:34:57.860 },
00:34:57.860 {
00:34:57.860 "name": "BaseBdev3",
00:34:57.860 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:34:57.860 "is_configured": true,
00:34:57.860 "data_offset": 2048,
00:34:57.860 "data_size": 63488
00:34:57.860 },
00:34:57.860 {
00:34:57.860 "name": "BaseBdev4",
00:34:57.860 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:34:57.860 "is_configured": true,
00:34:57.860 "data_offset": 2048,
00:34:57.860 "data_size": 63488
00:34:57.860 }
00:34:57.860 ]
00:34:57.860 }'
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:34:57.860 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:34:58.431 [2024-10-09 14:04:04.704163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:34:58.431 14:04:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:34:58.431 [2024-10-09 14:04:04.979983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:34:58.690 /dev/nbd0
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:34:58.690 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:34:58.691 1+0 records in
00:34:58.691 1+0 records out
00:34:58.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350874 s, 11.7 MB/s
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:34:58.691 14:04:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:35:05.253 63488+0 records in
00:35:05.253 63488+0 records out
00:35:05.253 32505856 bytes (33 MB, 31 MiB) copied, 6.21371 s, 5.2 MB/s
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:35:05.253 [2024-10-09 14:04:11.525725] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:05.253 [2024-10-09 14:04:11.533811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:35:05.253 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:35:05.254 "name": "raid_bdev1",
00:35:05.254 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:05.254 "strip_size_kb": 0,
00:35:05.254 "state": "online",
00:35:05.254 "raid_level": "raid1",
00:35:05.254 "superblock": true,
00:35:05.254 "num_base_bdevs": 4,
00:35:05.254 "num_base_bdevs_discovered": 3,
00:35:05.254 "num_base_bdevs_operational": 3,
00:35:05.254 "base_bdevs_list": [
00:35:05.254 {
00:35:05.254 "name": null,
00:35:05.254 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:05.254 "is_configured": false,
00:35:05.254 "data_offset": 0,
00:35:05.254 "data_size": 63488
00:35:05.254 },
00:35:05.254 {
00:35:05.254 "name": "BaseBdev2",
00:35:05.254 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:35:05.254 "is_configured": true,
00:35:05.254 "data_offset": 2048,
00:35:05.254 "data_size": 63488
00:35:05.254 },
00:35:05.254 {
00:35:05.254 "name": "BaseBdev3",
00:35:05.254 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:05.254 "is_configured": true,
00:35:05.254 "data_offset": 2048,
00:35:05.254 "data_size": 63488
00:35:05.254 },
00:35:05.254 {
00:35:05.254 "name": "BaseBdev4",
00:35:05.254 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:05.254 "is_configured": true,
00:35:05.254 "data_offset": 2048,
00:35:05.254 "data_size": 63488
00:35:05.254 }
00:35:05.254 ]
00:35:05.254 }'
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:05.254 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:05.513 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:35:05.513 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:05.513 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:05.513 [2024-10-09 14:04:11.965929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:35:05.513 [2024-10-09 14:04:11.969477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360
00:35:05.513 [2024-10-09 14:04:11.971766] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:35:05.513 14:04:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:05.513 14:04:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:06.448 14:04:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:06.707 "name": "raid_bdev1",
00:35:06.707 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:06.707 "strip_size_kb": 0,
00:35:06.707 "state": "online",
00:35:06.707 "raid_level": "raid1",
00:35:06.707 "superblock": true,
00:35:06.707 "num_base_bdevs": 4,
00:35:06.707 "num_base_bdevs_discovered": 4,
00:35:06.707 "num_base_bdevs_operational": 4,
00:35:06.707 "process": {
00:35:06.707 "type": "rebuild",
00:35:06.707 "target": "spare",
00:35:06.707 "progress": {
00:35:06.707 "blocks": 20480,
00:35:06.707 "percent": 32
00:35:06.707 }
00:35:06.707 },
00:35:06.707 "base_bdevs_list": [
00:35:06.707 {
00:35:06.707 "name": "spare",
00:35:06.707 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb",
00:35:06.707 "is_configured": true,
00:35:06.707 "data_offset": 2048,
00:35:06.707 "data_size": 63488
00:35:06.707 },
00:35:06.707 {
00:35:06.707 "name": "BaseBdev2",
00:35:06.707 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:35:06.707 "is_configured": true,
00:35:06.707 "data_offset": 2048,
00:35:06.707 "data_size": 63488
00:35:06.707 },
00:35:06.707 {
00:35:06.707 "name": "BaseBdev3",
00:35:06.707 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:06.707 "is_configured": true,
00:35:06.707 "data_offset": 2048,
00:35:06.707 "data_size": 63488
00:35:06.707 },
00:35:06.707 {
00:35:06.707 "name": "BaseBdev4",
00:35:06.707 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:06.707 "is_configured": true,
00:35:06.707 "data_offset": 2048,
00:35:06.707 "data_size": 63488
00:35:06.707 }
00:35:06.707 ]
00:35:06.707 }'
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:06.707 [2024-10-09 14:04:13.128458] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:35:06.707 [2024-10-09 14:04:13.179281] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:35:06.707 [2024-10-09 14:04:13.179503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:35:06.707 [2024-10-09 14:04:13.179620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:35:06.707 [2024-10-09 14:04:13.179661] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:35:06.707 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:35:06.708 "name": "raid_bdev1",
00:35:06.708 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:06.708 "strip_size_kb": 0,
00:35:06.708 "state": "online",
00:35:06.708 "raid_level": "raid1",
00:35:06.708 "superblock": true,
00:35:06.708 "num_base_bdevs": 4,
00:35:06.708 "num_base_bdevs_discovered": 3,
00:35:06.708 "num_base_bdevs_operational": 3,
00:35:06.708 "base_bdevs_list": [
00:35:06.708 {
00:35:06.708 "name": null,
00:35:06.708 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:06.708 "is_configured": false,
00:35:06.708 "data_offset": 0,
00:35:06.708 "data_size": 63488
00:35:06.708 },
00:35:06.708 {
00:35:06.708 "name": "BaseBdev2",
00:35:06.708 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:35:06.708 "is_configured": true,
00:35:06.708 "data_offset": 2048,
00:35:06.708 "data_size": 63488
00:35:06.708 },
00:35:06.708 {
00:35:06.708 "name": "BaseBdev3",
00:35:06.708 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:06.708 "is_configured": true,
00:35:06.708 "data_offset": 2048,
00:35:06.708 "data_size": 63488
00:35:06.708 },
00:35:06.708 {
00:35:06.708 "name": "BaseBdev4",
00:35:06.708 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:06.708 "is_configured": true,
00:35:06.708 "data_offset": 2048,
00:35:06.708 "data_size": 63488
00:35:06.708 }
00:35:06.708 ]
00:35:06.708 }'
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:35:06.708 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:07.275 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:07.275 "name": "raid_bdev1",
00:35:07.275 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:07.275 "strip_size_kb": 0,
00:35:07.275 "state": "online",
00:35:07.275 "raid_level": "raid1",
00:35:07.275 "superblock": true,
00:35:07.275 "num_base_bdevs": 4,
00:35:07.275 "num_base_bdevs_discovered": 3,
00:35:07.275 "num_base_bdevs_operational": 3,
00:35:07.275 "base_bdevs_list": [
00:35:07.275 {
00:35:07.275 "name": null,
00:35:07.275 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:07.275 "is_configured": false,
00:35:07.275 "data_offset": 0,
00:35:07.275 "data_size": 63488
00:35:07.275 },
00:35:07.275 {
00:35:07.275 "name": "BaseBdev2",
00:35:07.276 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:35:07.276 "is_configured": true,
00:35:07.276 "data_offset": 2048,
00:35:07.276 "data_size": 63488
00:35:07.276 },
00:35:07.276 {
00:35:07.276 "name": "BaseBdev3",
00:35:07.276 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:07.276 "is_configured": true,
00:35:07.276 "data_offset": 2048,
00:35:07.276 "data_size": 63488
00:35:07.276 },
00:35:07.276 {
00:35:07.276 "name": "BaseBdev4",
00:35:07.276 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:07.276 "is_configured": true,
00:35:07.276 "data_offset": 2048,
00:35:07.276 "data_size": 63488
00:35:07.276 }
00:35:07.276 ]
00:35:07.276 }'
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:07.276 [2024-10-09 14:04:13.780214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:35:07.276 [2024-10-09 14:04:13.783681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430
00:35:07.276 [2024-10-09 14:04:13.785943] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:07.276 14:04:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:08.652 "name": "raid_bdev1",
00:35:08.652 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:08.652 "strip_size_kb": 0,
00:35:08.652 "state": "online",
00:35:08.652 "raid_level": "raid1",
00:35:08.652 "superblock": true,
00:35:08.652 "num_base_bdevs": 4,
00:35:08.652 "num_base_bdevs_discovered": 4,
00:35:08.652 "num_base_bdevs_operational": 4,
00:35:08.652 "process": {
00:35:08.652 "type": "rebuild",
00:35:08.652 "target": "spare",
00:35:08.652 "progress": {
00:35:08.652 "blocks": 20480,
00:35:08.652 "percent": 32
00:35:08.652 }
00:35:08.652 },
00:35:08.652 "base_bdevs_list": [
00:35:08.652 {
00:35:08.652 "name": "spare",
00:35:08.652 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": "BaseBdev2",
00:35:08.652 "uuid": "6d654f7c-9a9d-59f8-8d39-b642bb4090f4",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": "BaseBdev3",
00:35:08.652 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": "BaseBdev4",
00:35:08.652 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 }
00:35:08.652 ]
00:35:08.652 }'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:35:08.652 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:08.652 14:04:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:08.652 [2024-10-09 14:04:14.938959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:35:08.652 [2024-10-09 14:04:15.092422] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:08.652 "name": "raid_bdev1",
00:35:08.652 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:08.652 "strip_size_kb": 0,
00:35:08.652 "state": "online",
00:35:08.652 "raid_level": "raid1",
00:35:08.652 "superblock": true,
00:35:08.652 "num_base_bdevs": 4,
00:35:08.652 "num_base_bdevs_discovered": 3,
00:35:08.652 "num_base_bdevs_operational": 3,
00:35:08.652 "process": {
00:35:08.652 "type": "rebuild",
00:35:08.652 "target": "spare",
00:35:08.652 "progress": {
00:35:08.652 "blocks": 24576,
00:35:08.652 "percent": 38
00:35:08.652 }
00:35:08.652 },
00:35:08.652 "base_bdevs_list": [
00:35:08.652 {
00:35:08.652 "name": "spare",
00:35:08.652 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": null,
00:35:08.652 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:08.652 "is_configured": false,
00:35:08.652 "data_offset": 0,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": "BaseBdev3",
00:35:08.652 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 },
00:35:08.652 {
00:35:08.652 "name": "BaseBdev4",
00:35:08.652 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:08.652 "is_configured": true,
00:35:08.652 "data_offset": 2048,
00:35:08.652 "data_size": 63488
00:35:08.652 }
00:35:08.652 ]
00:35:08.652 }'
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:35:08.652 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:08.910 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:35:08.910 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=389
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:08.911 "name": "raid_bdev1",
00:35:08.911 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b",
00:35:08.911 "strip_size_kb": 0,
00:35:08.911 "state": "online",
00:35:08.911 "raid_level": "raid1",
00:35:08.911 "superblock": true,
00:35:08.911 "num_base_bdevs": 4,
00:35:08.911 "num_base_bdevs_discovered": 3,
00:35:08.911 "num_base_bdevs_operational": 3,
00:35:08.911 "process": {
00:35:08.911 "type": "rebuild",
00:35:08.911 "target": "spare",
00:35:08.911 "progress": {
00:35:08.911 "blocks": 26624,
00:35:08.911 "percent": 41
00:35:08.911 }
00:35:08.911 },
00:35:08.911 "base_bdevs_list": [
00:35:08.911 {
00:35:08.911 "name": "spare",
00:35:08.911 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb",
00:35:08.911 "is_configured": true,
00:35:08.911 "data_offset": 2048,
00:35:08.911 "data_size": 63488
00:35:08.911 },
00:35:08.911 {
00:35:08.911 "name": null,
00:35:08.911 "uuid": "00000000-0000-0000-0000-000000000000",
00:35:08.911 "is_configured": false,
00:35:08.911 "data_offset": 0,
00:35:08.911 "data_size": 63488
00:35:08.911 },
00:35:08.911 {
00:35:08.911 "name": "BaseBdev3",
00:35:08.911 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18",
00:35:08.911 "is_configured": true,
00:35:08.911 "data_offset": 2048,
00:35:08.911 "data_size": 63488
00:35:08.911 },
00:35:08.911 {
00:35:08.911 "name": "BaseBdev4",
00:35:08.911 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91",
00:35:08.911 "is_configured": true,
00:35:08.911 "data_offset": 2048,
00:35:08.911 "data_size": 63488
00:35:08.911 }
00:35:08.911 ]
00:35:08.911 }'
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:35:08.911 14:04:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:35:09.846 14:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:35:10.104 "name": "raid_bdev1",
"uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:10.104 "strip_size_kb": 0, 00:35:10.104 "state": "online", 00:35:10.104 "raid_level": "raid1", 00:35:10.104 "superblock": true, 00:35:10.104 "num_base_bdevs": 4, 00:35:10.104 "num_base_bdevs_discovered": 3, 00:35:10.104 "num_base_bdevs_operational": 3, 00:35:10.104 "process": { 00:35:10.104 "type": "rebuild", 00:35:10.104 "target": "spare", 00:35:10.104 "progress": { 00:35:10.104 "blocks": 49152, 00:35:10.104 "percent": 77 00:35:10.104 } 00:35:10.104 }, 00:35:10.104 "base_bdevs_list": [ 00:35:10.104 { 00:35:10.104 "name": "spare", 00:35:10.104 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:10.104 "is_configured": true, 00:35:10.104 "data_offset": 2048, 00:35:10.104 "data_size": 63488 00:35:10.104 }, 00:35:10.104 { 00:35:10.104 "name": null, 00:35:10.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.104 "is_configured": false, 00:35:10.104 "data_offset": 0, 00:35:10.104 "data_size": 63488 00:35:10.104 }, 00:35:10.104 { 00:35:10.104 "name": "BaseBdev3", 00:35:10.104 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:10.104 "is_configured": true, 00:35:10.104 "data_offset": 2048, 00:35:10.104 "data_size": 63488 00:35:10.104 }, 00:35:10.104 { 00:35:10.104 "name": "BaseBdev4", 00:35:10.104 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:10.104 "is_configured": true, 00:35:10.104 "data_offset": 2048, 00:35:10.104 "data_size": 63488 00:35:10.104 } 00:35:10.104 ] 00:35:10.104 }' 00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:10.104 14:04:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:35:10.671 [2024-10-09 14:04:17.003331] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:10.671 [2024-10-09 14:04:17.003409] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:10.671 [2024-10-09 14:04:17.003520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:11.239 "name": "raid_bdev1", 00:35:11.239 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:11.239 "strip_size_kb": 0, 00:35:11.239 "state": "online", 00:35:11.239 "raid_level": "raid1", 00:35:11.239 "superblock": true, 00:35:11.239 "num_base_bdevs": 
4, 00:35:11.239 "num_base_bdevs_discovered": 3, 00:35:11.239 "num_base_bdevs_operational": 3, 00:35:11.239 "base_bdevs_list": [ 00:35:11.239 { 00:35:11.239 "name": "spare", 00:35:11.239 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": null, 00:35:11.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.239 "is_configured": false, 00:35:11.239 "data_offset": 0, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": "BaseBdev3", 00:35:11.239 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": "BaseBdev4", 00:35:11.239 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 } 00:35:11.239 ] 00:35:11.239 }' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:11.239 14:04:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:11.239 "name": "raid_bdev1", 00:35:11.239 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:11.239 "strip_size_kb": 0, 00:35:11.239 "state": "online", 00:35:11.239 "raid_level": "raid1", 00:35:11.239 "superblock": true, 00:35:11.239 "num_base_bdevs": 4, 00:35:11.239 "num_base_bdevs_discovered": 3, 00:35:11.239 "num_base_bdevs_operational": 3, 00:35:11.239 "base_bdevs_list": [ 00:35:11.239 { 00:35:11.239 "name": "spare", 00:35:11.239 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": null, 00:35:11.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.239 "is_configured": false, 00:35:11.239 "data_offset": 0, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": "BaseBdev3", 00:35:11.239 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 }, 00:35:11.239 { 00:35:11.239 "name": "BaseBdev4", 00:35:11.239 "uuid": 
"ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:11.239 "is_configured": true, 00:35:11.239 "data_offset": 2048, 00:35:11.239 "data_size": 63488 00:35:11.239 } 00:35:11.239 ] 00:35:11.239 }' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:11.239 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.498 14:04:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.498 "name": "raid_bdev1", 00:35:11.498 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:11.498 "strip_size_kb": 0, 00:35:11.498 "state": "online", 00:35:11.498 "raid_level": "raid1", 00:35:11.498 "superblock": true, 00:35:11.498 "num_base_bdevs": 4, 00:35:11.498 "num_base_bdevs_discovered": 3, 00:35:11.498 "num_base_bdevs_operational": 3, 00:35:11.498 "base_bdevs_list": [ 00:35:11.498 { 00:35:11.498 "name": "spare", 00:35:11.498 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:11.498 "is_configured": true, 00:35:11.498 "data_offset": 2048, 00:35:11.498 "data_size": 63488 00:35:11.498 }, 00:35:11.498 { 00:35:11.498 "name": null, 00:35:11.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:11.498 "is_configured": false, 00:35:11.498 "data_offset": 0, 00:35:11.498 "data_size": 63488 00:35:11.498 }, 00:35:11.498 { 00:35:11.498 "name": "BaseBdev3", 00:35:11.498 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:11.498 "is_configured": true, 00:35:11.498 "data_offset": 2048, 00:35:11.498 "data_size": 63488 00:35:11.498 }, 00:35:11.498 { 00:35:11.498 "name": "BaseBdev4", 00:35:11.498 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:11.498 "is_configured": true, 00:35:11.498 "data_offset": 2048, 00:35:11.498 "data_size": 63488 00:35:11.498 } 00:35:11.498 ] 00:35:11.498 }' 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.498 14:04:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.757 [2024-10-09 14:04:18.259804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:11.757 [2024-10-09 14:04:18.259834] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:11.757 [2024-10-09 14:04:18.259920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:11.757 [2024-10-09 14:04:18.260000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:11.757 [2024-10-09 14:04:18.260022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.757 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:12.016 
14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.016 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:12.275 /dev/nbd0 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:12.275 14:04:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:12.275 1+0 records in 00:35:12.275 1+0 records out 00:35:12.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603069 s, 6.8 MB/s 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.275 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:35:12.534 /dev/nbd1 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:12.534 1+0 records in 00:35:12.534 1+0 records out 00:35:12.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408227 s, 10.0 MB/s 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:12.534 14:04:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:12.793 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.052 [2024-10-09 14:04:19.572836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:13.052 [2024-10-09 14:04:19.572897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:13.052 [2024-10-09 14:04:19.572918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:35:13.052 [2024-10-09 14:04:19.572937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:13.052 [2024-10-09 14:04:19.575469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:13.052 [2024-10-09 14:04:19.575514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:35:13.052 [2024-10-09 14:04:19.575609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:13.052 [2024-10-09 14:04:19.575658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:13.052 [2024-10-09 14:04:19.575760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:13.052 [2024-10-09 14:04:19.575850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:13.052 spare 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.052 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.311 [2024-10-09 14:04:19.675924] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:35:13.311 [2024-10-09 14:04:19.675956] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:13.311 [2024-10-09 14:04:19.676258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:35:13.311 [2024-10-09 14:04:19.676403] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:35:13.311 [2024-10-09 14:04:19.676413] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:35:13.311 [2024-10-09 14:04:19.676577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:13.311 14:04:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:13.311 "name": "raid_bdev1", 00:35:13.311 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:13.311 "strip_size_kb": 0, 00:35:13.311 "state": "online", 00:35:13.311 "raid_level": "raid1", 00:35:13.311 "superblock": true, 00:35:13.311 "num_base_bdevs": 4, 00:35:13.311 "num_base_bdevs_discovered": 3, 00:35:13.311 "num_base_bdevs_operational": 3, 00:35:13.311 "base_bdevs_list": [ 00:35:13.311 { 
00:35:13.311 "name": "spare", 00:35:13.311 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:13.311 "is_configured": true, 00:35:13.311 "data_offset": 2048, 00:35:13.311 "data_size": 63488 00:35:13.311 }, 00:35:13.311 { 00:35:13.311 "name": null, 00:35:13.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.311 "is_configured": false, 00:35:13.311 "data_offset": 2048, 00:35:13.311 "data_size": 63488 00:35:13.311 }, 00:35:13.311 { 00:35:13.311 "name": "BaseBdev3", 00:35:13.311 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:13.311 "is_configured": true, 00:35:13.311 "data_offset": 2048, 00:35:13.311 "data_size": 63488 00:35:13.311 }, 00:35:13.311 { 00:35:13.311 "name": "BaseBdev4", 00:35:13.311 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:13.311 "is_configured": true, 00:35:13.311 "data_offset": 2048, 00:35:13.311 "data_size": 63488 00:35:13.311 } 00:35:13.311 ] 00:35:13.311 }' 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:13.311 14:04:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.570 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:13.830 "name": "raid_bdev1", 00:35:13.830 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:13.830 "strip_size_kb": 0, 00:35:13.830 "state": "online", 00:35:13.830 "raid_level": "raid1", 00:35:13.830 "superblock": true, 00:35:13.830 "num_base_bdevs": 4, 00:35:13.830 "num_base_bdevs_discovered": 3, 00:35:13.830 "num_base_bdevs_operational": 3, 00:35:13.830 "base_bdevs_list": [ 00:35:13.830 { 00:35:13.830 "name": "spare", 00:35:13.830 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:13.830 "is_configured": true, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": null, 00:35:13.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.830 "is_configured": false, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": "BaseBdev3", 00:35:13.830 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:13.830 "is_configured": true, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": "BaseBdev4", 00:35:13.830 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:13.830 "is_configured": true, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 } 00:35:13.830 ] 00:35:13.830 }' 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:13.830 
14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.830 [2024-10-09 14:04:20.273059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:13.830 14:04:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:13.830 "name": "raid_bdev1", 00:35:13.830 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:13.830 "strip_size_kb": 0, 00:35:13.830 "state": "online", 00:35:13.830 "raid_level": "raid1", 00:35:13.830 "superblock": true, 00:35:13.830 "num_base_bdevs": 4, 00:35:13.830 "num_base_bdevs_discovered": 2, 00:35:13.830 "num_base_bdevs_operational": 2, 00:35:13.830 "base_bdevs_list": [ 00:35:13.830 { 00:35:13.830 "name": null, 00:35:13.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.830 "is_configured": false, 00:35:13.830 "data_offset": 0, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": null, 00:35:13.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.830 "is_configured": false, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": "BaseBdev3", 00:35:13.830 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:13.830 
"is_configured": true, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 }, 00:35:13.830 { 00:35:13.830 "name": "BaseBdev4", 00:35:13.830 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:13.830 "is_configured": true, 00:35:13.830 "data_offset": 2048, 00:35:13.830 "data_size": 63488 00:35:13.830 } 00:35:13.830 ] 00:35:13.830 }' 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:13.830 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.398 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:14.398 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:14.398 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.398 [2024-10-09 14:04:20.737216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:14.398 [2024-10-09 14:04:20.737403] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:35:14.398 [2024-10-09 14:04:20.737426] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:14.398 [2024-10-09 14:04:20.737466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:14.398 [2024-10-09 14:04:20.740774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:35:14.398 [2024-10-09 14:04:20.743036] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:14.398 14:04:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:14.398 14:04:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:15.350 "name": "raid_bdev1", 00:35:15.350 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:15.350 "strip_size_kb": 0, 00:35:15.350 "state": "online", 00:35:15.350 "raid_level": "raid1", 
00:35:15.350 "superblock": true, 00:35:15.350 "num_base_bdevs": 4, 00:35:15.350 "num_base_bdevs_discovered": 3, 00:35:15.350 "num_base_bdevs_operational": 3, 00:35:15.350 "process": { 00:35:15.350 "type": "rebuild", 00:35:15.350 "target": "spare", 00:35:15.350 "progress": { 00:35:15.350 "blocks": 20480, 00:35:15.350 "percent": 32 00:35:15.350 } 00:35:15.350 }, 00:35:15.350 "base_bdevs_list": [ 00:35:15.350 { 00:35:15.350 "name": "spare", 00:35:15.350 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:15.350 "is_configured": true, 00:35:15.350 "data_offset": 2048, 00:35:15.350 "data_size": 63488 00:35:15.350 }, 00:35:15.350 { 00:35:15.350 "name": null, 00:35:15.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.350 "is_configured": false, 00:35:15.350 "data_offset": 2048, 00:35:15.350 "data_size": 63488 00:35:15.350 }, 00:35:15.350 { 00:35:15.350 "name": "BaseBdev3", 00:35:15.350 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:15.350 "is_configured": true, 00:35:15.350 "data_offset": 2048, 00:35:15.350 "data_size": 63488 00:35:15.350 }, 00:35:15.350 { 00:35:15.350 "name": "BaseBdev4", 00:35:15.350 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:15.350 "is_configured": true, 00:35:15.350 "data_offset": 2048, 00:35:15.350 "data_size": 63488 00:35:15.350 } 00:35:15.350 ] 00:35:15.350 }' 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:35:15.350 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.609 [2024-10-09 14:04:21.896081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:15.609 [2024-10-09 14:04:21.949400] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:15.609 [2024-10-09 14:04:21.949609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:15.609 [2024-10-09 14:04:21.949718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:15.609 [2024-10-09 14:04:21.949763] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.609 14:04:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.609 14:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:15.609 "name": "raid_bdev1", 00:35:15.609 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:15.609 "strip_size_kb": 0, 00:35:15.609 "state": "online", 00:35:15.609 "raid_level": "raid1", 00:35:15.609 "superblock": true, 00:35:15.609 "num_base_bdevs": 4, 00:35:15.609 "num_base_bdevs_discovered": 2, 00:35:15.609 "num_base_bdevs_operational": 2, 00:35:15.609 "base_bdevs_list": [ 00:35:15.609 { 00:35:15.609 "name": null, 00:35:15.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.609 "is_configured": false, 00:35:15.609 "data_offset": 0, 00:35:15.609 "data_size": 63488 00:35:15.609 }, 00:35:15.609 { 00:35:15.609 "name": null, 00:35:15.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.609 "is_configured": false, 00:35:15.609 "data_offset": 2048, 00:35:15.609 "data_size": 63488 00:35:15.609 }, 00:35:15.609 { 00:35:15.609 "name": "BaseBdev3", 00:35:15.609 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:15.609 "is_configured": true, 00:35:15.609 "data_offset": 2048, 00:35:15.609 "data_size": 63488 00:35:15.609 }, 00:35:15.609 { 00:35:15.609 "name": "BaseBdev4", 00:35:15.609 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:15.609 "is_configured": true, 00:35:15.609 "data_offset": 2048, 00:35:15.609 "data_size": 63488 00:35:15.609 } 00:35:15.609 ] 00:35:15.609 }' 00:35:15.609 14:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:35:15.609 14:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.868 14:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:15.868 14:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:15.868 14:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.868 [2024-10-09 14:04:22.409655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:15.868 [2024-10-09 14:04:22.409835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:15.868 [2024-10-09 14:04:22.409868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:35:15.868 [2024-10-09 14:04:22.409884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:15.868 [2024-10-09 14:04:22.410342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:15.868 [2024-10-09 14:04:22.410365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:15.868 [2024-10-09 14:04:22.410446] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:15.868 [2024-10-09 14:04:22.410467] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:35:15.869 [2024-10-09 14:04:22.410479] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:15.869 [2024-10-09 14:04:22.410506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:15.869 [2024-10-09 14:04:22.413722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:35:15.869 spare 00:35:15.869 14:04:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:15.869 14:04:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:35:15.869 [2024-10-09 14:04:22.416031] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.247 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:17.247 "name": "raid_bdev1", 00:35:17.247 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:17.247 "strip_size_kb": 0, 00:35:17.247 "state": "online", 00:35:17.247 
"raid_level": "raid1", 00:35:17.247 "superblock": true, 00:35:17.247 "num_base_bdevs": 4, 00:35:17.247 "num_base_bdevs_discovered": 3, 00:35:17.247 "num_base_bdevs_operational": 3, 00:35:17.247 "process": { 00:35:17.247 "type": "rebuild", 00:35:17.247 "target": "spare", 00:35:17.247 "progress": { 00:35:17.247 "blocks": 20480, 00:35:17.247 "percent": 32 00:35:17.247 } 00:35:17.247 }, 00:35:17.247 "base_bdevs_list": [ 00:35:17.247 { 00:35:17.247 "name": "spare", 00:35:17.247 "uuid": "f2e1327e-e5e7-5a59-998b-acdabb5eedeb", 00:35:17.247 "is_configured": true, 00:35:17.247 "data_offset": 2048, 00:35:17.247 "data_size": 63488 00:35:17.247 }, 00:35:17.247 { 00:35:17.248 "name": null, 00:35:17.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.248 "is_configured": false, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 }, 00:35:17.248 { 00:35:17.248 "name": "BaseBdev3", 00:35:17.248 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:17.248 "is_configured": true, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 }, 00:35:17.248 { 00:35:17.248 "name": "BaseBdev4", 00:35:17.248 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:17.248 "is_configured": true, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 } 00:35:17.248 ] 00:35:17.248 }' 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.248 [2024-10-09 14:04:23.566759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:17.248 [2024-10-09 14:04:23.622329] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:17.248 [2024-10-09 14:04:23.622523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:17.248 [2024-10-09 14:04:23.622565] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:17.248 [2024-10-09 14:04:23.622576] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:17.248 
14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:17.248 "name": "raid_bdev1", 00:35:17.248 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:17.248 "strip_size_kb": 0, 00:35:17.248 "state": "online", 00:35:17.248 "raid_level": "raid1", 00:35:17.248 "superblock": true, 00:35:17.248 "num_base_bdevs": 4, 00:35:17.248 "num_base_bdevs_discovered": 2, 00:35:17.248 "num_base_bdevs_operational": 2, 00:35:17.248 "base_bdevs_list": [ 00:35:17.248 { 00:35:17.248 "name": null, 00:35:17.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.248 "is_configured": false, 00:35:17.248 "data_offset": 0, 00:35:17.248 "data_size": 63488 00:35:17.248 }, 00:35:17.248 { 00:35:17.248 "name": null, 00:35:17.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.248 "is_configured": false, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 }, 00:35:17.248 { 00:35:17.248 "name": "BaseBdev3", 00:35:17.248 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:17.248 "is_configured": true, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 }, 00:35:17.248 { 00:35:17.248 "name": "BaseBdev4", 00:35:17.248 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:17.248 "is_configured": true, 00:35:17.248 "data_offset": 2048, 00:35:17.248 "data_size": 63488 00:35:17.248 } 00:35:17.248 ] 00:35:17.248 }' 00:35:17.248 14:04:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:17.248 14:04:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:17.816 "name": "raid_bdev1", 00:35:17.816 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:17.816 "strip_size_kb": 0, 00:35:17.816 "state": "online", 00:35:17.816 "raid_level": "raid1", 00:35:17.816 "superblock": true, 00:35:17.816 "num_base_bdevs": 4, 00:35:17.816 "num_base_bdevs_discovered": 2, 00:35:17.816 "num_base_bdevs_operational": 2, 00:35:17.816 "base_bdevs_list": [ 00:35:17.816 { 00:35:17.816 "name": null, 00:35:17.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.816 "is_configured": false, 00:35:17.816 "data_offset": 0, 00:35:17.816 "data_size": 63488 00:35:17.816 }, 00:35:17.816 
{ 00:35:17.816 "name": null, 00:35:17.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:17.816 "is_configured": false, 00:35:17.816 "data_offset": 2048, 00:35:17.816 "data_size": 63488 00:35:17.816 }, 00:35:17.816 { 00:35:17.816 "name": "BaseBdev3", 00:35:17.816 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:17.816 "is_configured": true, 00:35:17.816 "data_offset": 2048, 00:35:17.816 "data_size": 63488 00:35:17.816 }, 00:35:17.816 { 00:35:17.816 "name": "BaseBdev4", 00:35:17.816 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:17.816 "is_configured": true, 00:35:17.816 "data_offset": 2048, 00:35:17.816 "data_size": 63488 00:35:17.816 } 00:35:17.816 ] 00:35:17.816 }' 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.816 [2024-10-09 14:04:24.234323] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:17.816 [2024-10-09 14:04:24.234379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:17.816 [2024-10-09 14:04:24.234403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:35:17.816 [2024-10-09 14:04:24.234414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:17.816 [2024-10-09 14:04:24.234863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:17.816 [2024-10-09 14:04:24.234882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:17.816 [2024-10-09 14:04:24.234957] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:17.816 [2024-10-09 14:04:24.234978] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:17.816 [2024-10-09 14:04:24.234991] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:17.816 [2024-10-09 14:04:24.235002] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:35:17.816 BaseBdev1 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.816 14:04:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:18.752 14:04:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:18.752 "name": "raid_bdev1", 00:35:18.752 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:18.752 "strip_size_kb": 0, 00:35:18.752 "state": "online", 00:35:18.752 "raid_level": "raid1", 00:35:18.752 "superblock": true, 00:35:18.752 "num_base_bdevs": 4, 00:35:18.752 "num_base_bdevs_discovered": 2, 00:35:18.752 "num_base_bdevs_operational": 2, 00:35:18.752 "base_bdevs_list": [ 00:35:18.752 { 00:35:18.752 "name": null, 00:35:18.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:18.752 "is_configured": false, 00:35:18.752 "data_offset": 0, 00:35:18.752 "data_size": 63488 00:35:18.752 }, 00:35:18.752 { 00:35:18.752 "name": null, 00:35:18.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:18.752 
"is_configured": false, 00:35:18.752 "data_offset": 2048, 00:35:18.752 "data_size": 63488 00:35:18.752 }, 00:35:18.752 { 00:35:18.752 "name": "BaseBdev3", 00:35:18.752 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:18.752 "is_configured": true, 00:35:18.752 "data_offset": 2048, 00:35:18.752 "data_size": 63488 00:35:18.752 }, 00:35:18.752 { 00:35:18.752 "name": "BaseBdev4", 00:35:18.752 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:18.752 "is_configured": true, 00:35:18.752 "data_offset": 2048, 00:35:18.752 "data_size": 63488 00:35:18.752 } 00:35:18.752 ] 00:35:18.752 }' 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:18.752 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:19.320 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:35:19.321 "name": "raid_bdev1", 00:35:19.321 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:19.321 "strip_size_kb": 0, 00:35:19.321 "state": "online", 00:35:19.321 "raid_level": "raid1", 00:35:19.321 "superblock": true, 00:35:19.321 "num_base_bdevs": 4, 00:35:19.321 "num_base_bdevs_discovered": 2, 00:35:19.321 "num_base_bdevs_operational": 2, 00:35:19.321 "base_bdevs_list": [ 00:35:19.321 { 00:35:19.321 "name": null, 00:35:19.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:19.321 "is_configured": false, 00:35:19.321 "data_offset": 0, 00:35:19.321 "data_size": 63488 00:35:19.321 }, 00:35:19.321 { 00:35:19.321 "name": null, 00:35:19.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:19.321 "is_configured": false, 00:35:19.321 "data_offset": 2048, 00:35:19.321 "data_size": 63488 00:35:19.321 }, 00:35:19.321 { 00:35:19.321 "name": "BaseBdev3", 00:35:19.321 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:19.321 "is_configured": true, 00:35:19.321 "data_offset": 2048, 00:35:19.321 "data_size": 63488 00:35:19.321 }, 00:35:19.321 { 00:35:19.321 "name": "BaseBdev4", 00:35:19.321 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:19.321 "is_configured": true, 00:35:19.321 "data_offset": 2048, 00:35:19.321 "data_size": 63488 00:35:19.321 } 00:35:19.321 ] 00:35:19.321 }' 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:19.321 [2024-10-09 14:04:25.826713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:19.321 [2024-10-09 14:04:25.826864] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:19.321 [2024-10-09 14:04:25.826886] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:19.321 request: 00:35:19.321 { 00:35:19.321 "base_bdev": "BaseBdev1", 00:35:19.321 "raid_bdev": "raid_bdev1", 00:35:19.321 "method": "bdev_raid_add_base_bdev", 00:35:19.321 "req_id": 1 00:35:19.321 } 00:35:19.321 Got JSON-RPC error response 00:35:19.321 response: 00:35:19.321 { 00:35:19.321 "code": -22, 00:35:19.321 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:19.321 } 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:19.321 14:04:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:20.698 "name": "raid_bdev1", 00:35:20.698 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:20.698 "strip_size_kb": 0, 00:35:20.698 "state": "online", 00:35:20.698 "raid_level": "raid1", 00:35:20.698 "superblock": true, 00:35:20.698 "num_base_bdevs": 4, 00:35:20.698 "num_base_bdevs_discovered": 2, 00:35:20.698 "num_base_bdevs_operational": 2, 00:35:20.698 "base_bdevs_list": [ 00:35:20.698 { 00:35:20.698 "name": null, 00:35:20.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.698 "is_configured": false, 00:35:20.698 "data_offset": 0, 00:35:20.698 "data_size": 63488 00:35:20.698 }, 00:35:20.698 { 00:35:20.698 "name": null, 00:35:20.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.698 "is_configured": false, 00:35:20.698 "data_offset": 2048, 00:35:20.698 "data_size": 63488 00:35:20.698 }, 00:35:20.698 { 00:35:20.698 "name": "BaseBdev3", 00:35:20.698 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:20.698 "is_configured": true, 00:35:20.698 "data_offset": 2048, 00:35:20.698 "data_size": 63488 00:35:20.698 }, 00:35:20.698 { 00:35:20.698 "name": "BaseBdev4", 00:35:20.698 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:20.698 "is_configured": true, 00:35:20.698 "data_offset": 2048, 00:35:20.698 "data_size": 63488 00:35:20.698 } 00:35:20.698 ] 00:35:20.698 }' 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:20.698 14:04:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:20.958 14:04:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:20.958 "name": "raid_bdev1", 00:35:20.958 "uuid": "65fe15be-d7ec-4642-99fd-bffaf62eeb5b", 00:35:20.958 "strip_size_kb": 0, 00:35:20.958 "state": "online", 00:35:20.958 "raid_level": "raid1", 00:35:20.958 "superblock": true, 00:35:20.958 "num_base_bdevs": 4, 00:35:20.958 "num_base_bdevs_discovered": 2, 00:35:20.958 "num_base_bdevs_operational": 2, 00:35:20.958 "base_bdevs_list": [ 00:35:20.958 { 00:35:20.958 "name": null, 00:35:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.958 "is_configured": false, 00:35:20.958 "data_offset": 0, 00:35:20.958 "data_size": 63488 00:35:20.958 }, 00:35:20.958 { 00:35:20.958 "name": null, 00:35:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:20.958 "is_configured": false, 00:35:20.958 "data_offset": 2048, 00:35:20.958 "data_size": 63488 00:35:20.958 }, 00:35:20.958 { 00:35:20.958 "name": "BaseBdev3", 00:35:20.958 "uuid": "2a6979d1-990c-594d-a1c2-3029174d2a18", 00:35:20.958 "is_configured": true, 00:35:20.958 "data_offset": 2048, 00:35:20.958 "data_size": 63488 00:35:20.958 }, 
00:35:20.958 { 00:35:20.958 "name": "BaseBdev4", 00:35:20.958 "uuid": "ae209092-fd2b-542c-857c-7f443d55fc91", 00:35:20.958 "is_configured": true, 00:35:20.958 "data_offset": 2048, 00:35:20.958 "data_size": 63488 00:35:20.958 } 00:35:20.958 ] 00:35:20.958 }' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 89012 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 89012 ']' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 89012 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89012 00:35:20.958 killing process with pid 89012 00:35:20.958 Received shutdown signal, test time was about 60.000000 seconds 00:35:20.958 00:35:20.958 Latency(us) 00:35:20.958 [2024-10-09T14:04:27.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.958 [2024-10-09T14:04:27.509Z] =================================================================================================================== 00:35:20.958 [2024-10-09T14:04:27.509Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89012' 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 89012 00:35:20.958 [2024-10-09 14:04:27.467705] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:20.958 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 89012 00:35:20.958 [2024-10-09 14:04:27.467819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:20.958 [2024-10-09 14:04:27.467884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:20.958 [2024-10-09 14:04:27.467898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:35:21.227 [2024-10-09 14:04:27.519297] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:21.227 14:04:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:35:21.227 00:35:21.227 real 0m24.743s 00:35:21.227 user 0m29.823s 00:35:21.227 sys 0m4.682s 00:35:21.227 ************************************ 00:35:21.227 END TEST raid_rebuild_test_sb 00:35:21.227 ************************************ 00:35:21.227 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:21.227 14:04:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:21.489 14:04:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:35:21.489 14:04:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:21.489 14:04:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:21.489 14:04:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:35:21.489 ************************************ 00:35:21.489 START TEST raid_rebuild_test_io 00:35:21.489 ************************************ 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89759 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89759 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89759 ']' 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:21.489 14:04:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:21.489 [2024-10-09 14:04:27.953392] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:21.489 [2024-10-09 14:04:27.954631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:35:21.489 Zero copy mechanism will not be used. 
00:35:21.489 -allocations --file-prefix=spdk_pid89759 ] 00:35:21.748 [2024-10-09 14:04:28.135144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.748 [2024-10-09 14:04:28.180354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.748 [2024-10-09 14:04:28.224506] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:21.748 [2024-10-09 14:04:28.224544] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.684 BaseBdev1_malloc 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.684 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:28.929180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:22.685 [2024-10-09 14:04:28.929246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.685 [2024-10-09 14:04:28.929273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:35:22.685 [2024-10-09 14:04:28.929299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.685 [2024-10-09 14:04:28.931857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.685 [2024-10-09 14:04:28.932026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:22.685 BaseBdev1 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 BaseBdev2_malloc 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:28.968512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:22.685 [2024-10-09 14:04:28.968597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.685 [2024-10-09 14:04:28.968633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:22.685 [2024-10-09 14:04:28.968651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.685 [2024-10-09 14:04:28.971514] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.685 [2024-10-09 14:04:28.971681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:22.685 BaseBdev2 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 BaseBdev3_malloc 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:28.993933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:22.685 [2024-10-09 14:04:28.994085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.685 [2024-10-09 14:04:28.994169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:22.685 [2024-10-09 14:04:28.994249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.685 [2024-10-09 14:04:28.996758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.685 [2024-10-09 14:04:28.996912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:35:22.685 BaseBdev3 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 BaseBdev4_malloc 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:29.023209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:22.685 [2024-10-09 14:04:29.023366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.685 [2024-10-09 14:04:29.023402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:22.685 [2024-10-09 14:04:29.023414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.685 [2024-10-09 14:04:29.026109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.685 [2024-10-09 14:04:29.026149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:22.685 BaseBdev4 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 spare_malloc 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 spare_delay 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:29.064602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:22.685 [2024-10-09 14:04:29.064655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.685 [2024-10-09 14:04:29.064679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:22.685 [2024-10-09 14:04:29.064691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.685 [2024-10-09 14:04:29.067256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.685 [2024-10-09 14:04:29.067306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:22.685 spare 
00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 [2024-10-09 14:04:29.076691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:22.685 [2024-10-09 14:04:29.079058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:22.685 [2024-10-09 14:04:29.079129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:22.685 [2024-10-09 14:04:29.079171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:22.685 [2024-10-09 14:04:29.079251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:35:22.685 [2024-10-09 14:04:29.079262] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:35:22.685 [2024-10-09 14:04:29.079540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:22.685 [2024-10-09 14:04:29.079697] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:35:22.685 [2024-10-09 14:04:29.079718] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:35:22.685 [2024-10-09 14:04:29.079845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 4 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.685 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:22.685 "name": "raid_bdev1", 00:35:22.685 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:22.685 "strip_size_kb": 0, 00:35:22.685 "state": "online", 00:35:22.685 "raid_level": "raid1", 00:35:22.685 "superblock": false, 00:35:22.685 "num_base_bdevs": 4, 00:35:22.685 "num_base_bdevs_discovered": 4, 00:35:22.685 "num_base_bdevs_operational": 4, 00:35:22.685 
"base_bdevs_list": [ 00:35:22.685 { 00:35:22.685 "name": "BaseBdev1", 00:35:22.685 "uuid": "7591f5b4-1bc8-5eeb-95d0-344b710b807f", 00:35:22.685 "is_configured": true, 00:35:22.685 "data_offset": 0, 00:35:22.685 "data_size": 65536 00:35:22.685 }, 00:35:22.685 { 00:35:22.685 "name": "BaseBdev2", 00:35:22.685 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 00:35:22.685 "is_configured": true, 00:35:22.685 "data_offset": 0, 00:35:22.686 "data_size": 65536 00:35:22.686 }, 00:35:22.686 { 00:35:22.686 "name": "BaseBdev3", 00:35:22.686 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:22.686 "is_configured": true, 00:35:22.686 "data_offset": 0, 00:35:22.686 "data_size": 65536 00:35:22.686 }, 00:35:22.686 { 00:35:22.686 "name": "BaseBdev4", 00:35:22.686 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:22.686 "is_configured": true, 00:35:22.686 "data_offset": 0, 00:35:22.686 "data_size": 65536 00:35:22.686 } 00:35:22.686 ] 00:35:22.686 }' 00:35:22.686 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:22.686 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.253 [2024-10-09 14:04:29.549110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:23.253 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.254 [2024-10-09 14:04:29.640788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:23.254 "name": "raid_bdev1", 00:35:23.254 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:23.254 "strip_size_kb": 0, 00:35:23.254 "state": "online", 00:35:23.254 "raid_level": "raid1", 00:35:23.254 "superblock": false, 00:35:23.254 "num_base_bdevs": 4, 00:35:23.254 "num_base_bdevs_discovered": 3, 00:35:23.254 "num_base_bdevs_operational": 3, 00:35:23.254 "base_bdevs_list": [ 00:35:23.254 { 00:35:23.254 "name": null, 00:35:23.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:23.254 "is_configured": false, 00:35:23.254 "data_offset": 0, 00:35:23.254 "data_size": 65536 00:35:23.254 }, 00:35:23.254 { 00:35:23.254 "name": "BaseBdev2", 00:35:23.254 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 00:35:23.254 "is_configured": true, 00:35:23.254 "data_offset": 0, 00:35:23.254 "data_size": 65536 00:35:23.254 }, 00:35:23.254 { 00:35:23.254 "name": 
"BaseBdev3", 00:35:23.254 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:23.254 "is_configured": true, 00:35:23.254 "data_offset": 0, 00:35:23.254 "data_size": 65536 00:35:23.254 }, 00:35:23.254 { 00:35:23.254 "name": "BaseBdev4", 00:35:23.254 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:23.254 "is_configured": true, 00:35:23.254 "data_offset": 0, 00:35:23.254 "data_size": 65536 00:35:23.254 } 00:35:23.254 ] 00:35:23.254 }' 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:23.254 14:04:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.254 [2024-10-09 14:04:29.758866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:23.254 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:23.254 Zero copy mechanism will not be used. 00:35:23.254 Running I/O for 60 seconds... 00:35:23.848 14:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:23.848 14:04:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:23.848 14:04:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:23.848 [2024-10-09 14:04:30.094502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:23.848 14:04:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.848 14:04:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:23.848 [2024-10-09 14:04:30.165453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:23.848 [2024-10-09 14:04:30.168042] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:23.848 [2024-10-09 14:04:30.270739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:35:23.848 [2024-10-09 14:04:30.271472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:24.106 [2024-10-09 14:04:30.482219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:24.106 [2024-10-09 14:04:30.482479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:24.365 147.00 IOPS, 441.00 MiB/s [2024-10-09T14:04:30.916Z] [2024-10-09 14:04:30.838819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:24.624 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:24.624 [2024-10-09 14:04:31.171826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:24.624 
[2024-10-09 14:04:31.172148] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:24.883 "name": "raid_bdev1", 00:35:24.883 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:24.883 "strip_size_kb": 0, 00:35:24.883 "state": "online", 00:35:24.883 "raid_level": "raid1", 00:35:24.883 "superblock": false, 00:35:24.883 "num_base_bdevs": 4, 00:35:24.883 "num_base_bdevs_discovered": 4, 00:35:24.883 "num_base_bdevs_operational": 4, 00:35:24.883 "process": { 00:35:24.883 "type": "rebuild", 00:35:24.883 "target": "spare", 00:35:24.883 "progress": { 00:35:24.883 "blocks": 14336, 00:35:24.883 "percent": 21 00:35:24.883 } 00:35:24.883 }, 00:35:24.883 "base_bdevs_list": [ 00:35:24.883 { 00:35:24.883 "name": "spare", 00:35:24.883 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:24.883 "is_configured": true, 00:35:24.883 "data_offset": 0, 00:35:24.883 "data_size": 65536 00:35:24.883 }, 00:35:24.883 { 00:35:24.883 "name": "BaseBdev2", 00:35:24.883 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 00:35:24.883 "is_configured": true, 00:35:24.883 "data_offset": 0, 00:35:24.883 "data_size": 65536 00:35:24.883 }, 00:35:24.883 { 00:35:24.883 "name": "BaseBdev3", 00:35:24.883 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:24.883 "is_configured": true, 00:35:24.883 "data_offset": 0, 00:35:24.883 "data_size": 65536 00:35:24.883 }, 00:35:24.883 { 00:35:24.883 "name": "BaseBdev4", 00:35:24.883 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:24.883 "is_configured": true, 00:35:24.883 "data_offset": 0, 00:35:24.883 "data_size": 65536 00:35:24.883 } 00:35:24.883 ] 00:35:24.883 }' 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:24.883 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:24.883 [2024-10-09 14:04:31.287478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:24.883 [2024-10-09 14:04:31.401813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:35:24.883 [2024-10-09 14:04:31.403125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:35:25.142 [2024-10-09 14:04:31.510863] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:25.142 [2024-10-09 14:04:31.514138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.142 [2024-10-09 14:04:31.514282] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:25.142 [2024-10-09 14:04:31.514332] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:25.142 [2024-10-09 14:04:31.531813] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.142 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:25.142 "name": "raid_bdev1", 00:35:25.142 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:25.142 "strip_size_kb": 0, 00:35:25.142 "state": "online", 00:35:25.142 "raid_level": "raid1", 00:35:25.142 "superblock": false, 00:35:25.142 "num_base_bdevs": 4, 00:35:25.142 "num_base_bdevs_discovered": 3, 00:35:25.142 "num_base_bdevs_operational": 3, 00:35:25.142 "base_bdevs_list": [ 00:35:25.142 { 00:35:25.143 "name": null, 00:35:25.143 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:25.143 "is_configured": false, 00:35:25.143 "data_offset": 0, 00:35:25.143 "data_size": 65536 00:35:25.143 }, 00:35:25.143 { 00:35:25.143 "name": "BaseBdev2", 00:35:25.143 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 00:35:25.143 "is_configured": true, 00:35:25.143 "data_offset": 0, 00:35:25.143 "data_size": 65536 00:35:25.143 }, 00:35:25.143 { 00:35:25.143 "name": "BaseBdev3", 00:35:25.143 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:25.143 "is_configured": true, 00:35:25.143 "data_offset": 0, 00:35:25.143 "data_size": 65536 00:35:25.143 }, 00:35:25.143 { 00:35:25.143 "name": "BaseBdev4", 00:35:25.143 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:25.143 "is_configured": true, 00:35:25.143 "data_offset": 0, 00:35:25.143 "data_size": 65536 00:35:25.143 } 00:35:25.143 ] 00:35:25.143 }' 00:35:25.143 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:25.143 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:25.661 133.50 IOPS, 400.50 MiB/s [2024-10-09T14:04:32.212Z] 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.661 14:04:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:25.661 
14:04:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:25.661 "name": "raid_bdev1", 00:35:25.661 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:25.661 "strip_size_kb": 0, 00:35:25.661 "state": "online", 00:35:25.661 "raid_level": "raid1", 00:35:25.661 "superblock": false, 00:35:25.661 "num_base_bdevs": 4, 00:35:25.661 "num_base_bdevs_discovered": 3, 00:35:25.661 "num_base_bdevs_operational": 3, 00:35:25.661 "base_bdevs_list": [ 00:35:25.661 { 00:35:25.661 "name": null, 00:35:25.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.661 "is_configured": false, 00:35:25.661 "data_offset": 0, 00:35:25.661 "data_size": 65536 00:35:25.661 }, 00:35:25.661 { 00:35:25.661 "name": "BaseBdev2", 00:35:25.661 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 00:35:25.661 "is_configured": true, 00:35:25.661 "data_offset": 0, 00:35:25.661 "data_size": 65536 00:35:25.661 }, 00:35:25.661 { 00:35:25.661 "name": "BaseBdev3", 00:35:25.661 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:25.661 "is_configured": true, 00:35:25.661 "data_offset": 0, 00:35:25.661 "data_size": 65536 00:35:25.661 }, 00:35:25.661 { 00:35:25.661 "name": "BaseBdev4", 00:35:25.661 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:25.661 "is_configured": true, 00:35:25.661 "data_offset": 0, 00:35:25.661 "data_size": 65536 00:35:25.661 } 00:35:25.661 ] 00:35:25.661 }' 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:25.661 14:04:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:25.661 [2024-10-09 14:04:32.152453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.661 14:04:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:25.920 [2024-10-09 14:04:32.230348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:25.920 [2024-10-09 14:04:32.232854] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:25.920 [2024-10-09 14:04:32.353783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:25.920 [2024-10-09 14:04:32.355184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:26.179 [2024-10-09 14:04:32.596640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:26.438 152.67 IOPS, 458.00 MiB/s [2024-10-09T14:04:32.989Z] [2024-10-09 14:04:32.927750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:26.696 [2024-10-09 14:04:33.046586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:26.697 
14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:26.697 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:26.956 "name": "raid_bdev1", 00:35:26.956 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:26.956 "strip_size_kb": 0, 00:35:26.956 "state": "online", 00:35:26.956 "raid_level": "raid1", 00:35:26.956 "superblock": false, 00:35:26.956 "num_base_bdevs": 4, 00:35:26.956 "num_base_bdevs_discovered": 4, 00:35:26.956 "num_base_bdevs_operational": 4, 00:35:26.956 "process": { 00:35:26.956 "type": "rebuild", 00:35:26.956 "target": "spare", 00:35:26.956 "progress": { 00:35:26.956 "blocks": 12288, 00:35:26.956 "percent": 18 00:35:26.956 } 00:35:26.956 }, 00:35:26.956 "base_bdevs_list": [ 00:35:26.956 { 00:35:26.956 "name": "spare", 00:35:26.956 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:26.956 "is_configured": true, 00:35:26.956 "data_offset": 0, 00:35:26.956 "data_size": 65536 00:35:26.956 }, 00:35:26.956 { 00:35:26.956 "name": "BaseBdev2", 00:35:26.956 "uuid": "dfa611d8-801b-5b14-8f2e-56d894609634", 
00:35:26.956 "is_configured": true, 00:35:26.956 "data_offset": 0, 00:35:26.956 "data_size": 65536 00:35:26.956 }, 00:35:26.956 { 00:35:26.956 "name": "BaseBdev3", 00:35:26.956 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:26.956 "is_configured": true, 00:35:26.956 "data_offset": 0, 00:35:26.956 "data_size": 65536 00:35:26.956 }, 00:35:26.956 { 00:35:26.956 "name": "BaseBdev4", 00:35:26.956 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:26.956 "is_configured": true, 00:35:26.956 "data_offset": 0, 00:35:26.956 "data_size": 65536 00:35:26.956 } 00:35:26.956 ] 00:35:26.956 }' 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:26.956 [2024-10-09 14:04:33.309828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:26.956 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:26.956 [2024-10-09 
14:04:33.341687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:26.956 [2024-10-09 14:04:33.418125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:26.956 [2024-10-09 14:04:33.418823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:27.215 [2024-10-09 14:04:33.532199] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:35:27.216 [2024-10-09 14:04:33.532227] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:27.216 "name": "raid_bdev1", 00:35:27.216 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:27.216 "strip_size_kb": 0, 00:35:27.216 "state": "online", 00:35:27.216 "raid_level": "raid1", 00:35:27.216 "superblock": false, 00:35:27.216 "num_base_bdevs": 4, 00:35:27.216 "num_base_bdevs_discovered": 3, 00:35:27.216 "num_base_bdevs_operational": 3, 00:35:27.216 "process": { 00:35:27.216 "type": "rebuild", 00:35:27.216 "target": "spare", 00:35:27.216 "progress": { 00:35:27.216 "blocks": 16384, 00:35:27.216 "percent": 25 00:35:27.216 } 00:35:27.216 }, 00:35:27.216 "base_bdevs_list": [ 00:35:27.216 { 00:35:27.216 "name": "spare", 00:35:27.216 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": null, 00:35:27.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.216 "is_configured": false, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": "BaseBdev3", 00:35:27.216 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": "BaseBdev4", 00:35:27.216 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 } 00:35:27.216 ] 00:35:27.216 }' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:27.216 14:04:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:27.216 "name": "raid_bdev1", 00:35:27.216 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:27.216 "strip_size_kb": 0, 00:35:27.216 "state": "online", 00:35:27.216 "raid_level": "raid1", 00:35:27.216 "superblock": false, 00:35:27.216 "num_base_bdevs": 4, 00:35:27.216 "num_base_bdevs_discovered": 3, 00:35:27.216 "num_base_bdevs_operational": 3, 00:35:27.216 
"process": { 00:35:27.216 "type": "rebuild", 00:35:27.216 "target": "spare", 00:35:27.216 "progress": { 00:35:27.216 "blocks": 18432, 00:35:27.216 "percent": 28 00:35:27.216 } 00:35:27.216 }, 00:35:27.216 "base_bdevs_list": [ 00:35:27.216 { 00:35:27.216 "name": "spare", 00:35:27.216 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": null, 00:35:27.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.216 "is_configured": false, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": "BaseBdev3", 00:35:27.216 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 }, 00:35:27.216 { 00:35:27.216 "name": "BaseBdev4", 00:35:27.216 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:27.216 "is_configured": true, 00:35:27.216 "data_offset": 0, 00:35:27.216 "data_size": 65536 00:35:27.216 } 00:35:27.216 ] 00:35:27.216 }' 00:35:27.216 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:27.216 [2024-10-09 14:04:33.757800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:35:27.475 133.00 IOPS, 399.00 MiB/s [2024-10-09T14:04:34.026Z] 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:27.475 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:27.475 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:27.475 14:04:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:27.734 [2024-10-09 14:04:34.123617] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:35:27.993 [2024-10-09 14:04:34.332700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:35:27.993 [2024-10-09 14:04:34.333117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:35:28.252 [2024-10-09 14:04:34.659082] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:35:28.511 118.60 IOPS, 355.80 MiB/s [2024-10-09T14:04:35.062Z] 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:28.511 "name": "raid_bdev1", 00:35:28.511 "uuid": 
"aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:28.511 "strip_size_kb": 0, 00:35:28.511 "state": "online", 00:35:28.511 "raid_level": "raid1", 00:35:28.511 "superblock": false, 00:35:28.511 "num_base_bdevs": 4, 00:35:28.511 "num_base_bdevs_discovered": 3, 00:35:28.511 "num_base_bdevs_operational": 3, 00:35:28.511 "process": { 00:35:28.511 "type": "rebuild", 00:35:28.511 "target": "spare", 00:35:28.511 "progress": { 00:35:28.511 "blocks": 32768, 00:35:28.511 "percent": 50 00:35:28.511 } 00:35:28.511 }, 00:35:28.511 "base_bdevs_list": [ 00:35:28.511 { 00:35:28.511 "name": "spare", 00:35:28.511 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:28.511 "is_configured": true, 00:35:28.511 "data_offset": 0, 00:35:28.511 "data_size": 65536 00:35:28.511 }, 00:35:28.511 { 00:35:28.511 "name": null, 00:35:28.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:28.511 "is_configured": false, 00:35:28.511 "data_offset": 0, 00:35:28.511 "data_size": 65536 00:35:28.511 }, 00:35:28.511 { 00:35:28.511 "name": "BaseBdev3", 00:35:28.511 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:28.511 "is_configured": true, 00:35:28.511 "data_offset": 0, 00:35:28.511 "data_size": 65536 00:35:28.511 }, 00:35:28.511 { 00:35:28.511 "name": "BaseBdev4", 00:35:28.511 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:28.511 "is_configured": true, 00:35:28.511 "data_offset": 0, 00:35:28.511 "data_size": 65536 00:35:28.511 } 00:35:28.511 ] 00:35:28.511 }' 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.511 14:04:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:35:29.079 [2024-10-09 14:04:35.462038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:35:29.598 105.67 IOPS, 317.00 MiB/s [2024-10-09T14:04:36.149Z] 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.598 14:04:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:29.598 "name": "raid_bdev1", 00:35:29.598 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:29.598 "strip_size_kb": 0, 00:35:29.598 "state": "online", 00:35:29.598 "raid_level": "raid1", 00:35:29.598 "superblock": false, 00:35:29.598 "num_base_bdevs": 4, 00:35:29.598 "num_base_bdevs_discovered": 3, 00:35:29.598 "num_base_bdevs_operational": 3, 00:35:29.598 "process": { 00:35:29.598 "type": "rebuild", 00:35:29.598 "target": "spare", 
00:35:29.598 "progress": { 00:35:29.598 "blocks": 53248, 00:35:29.598 "percent": 81 00:35:29.598 } 00:35:29.598 }, 00:35:29.598 "base_bdevs_list": [ 00:35:29.598 { 00:35:29.598 "name": "spare", 00:35:29.598 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:29.598 "is_configured": true, 00:35:29.598 "data_offset": 0, 00:35:29.598 "data_size": 65536 00:35:29.598 }, 00:35:29.598 { 00:35:29.598 "name": null, 00:35:29.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:29.598 "is_configured": false, 00:35:29.598 "data_offset": 0, 00:35:29.598 "data_size": 65536 00:35:29.598 }, 00:35:29.598 { 00:35:29.598 "name": "BaseBdev3", 00:35:29.598 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:29.598 "is_configured": true, 00:35:29.598 "data_offset": 0, 00:35:29.598 "data_size": 65536 00:35:29.598 }, 00:35:29.598 { 00:35:29.598 "name": "BaseBdev4", 00:35:29.598 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:29.598 "is_configured": true, 00:35:29.598 "data_offset": 0, 00:35:29.598 "data_size": 65536 00:35:29.598 } 00:35:29.598 ] 00:35:29.598 }' 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:29.598 14:04:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:29.856 [2024-10-09 14:04:36.247315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:35:30.115 [2024-10-09 14:04:36.576156] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:30.374 [2024-10-09 14:04:36.676173] bdev_raid.c:2558:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:30.374 [2024-10-09 14:04:36.685088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:30.633 97.00 IOPS, 291.00 MiB/s [2024-10-09T14:04:37.184Z] 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.633 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:30.893 "name": "raid_bdev1", 00:35:30.893 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:30.893 "strip_size_kb": 0, 00:35:30.893 "state": "online", 00:35:30.893 "raid_level": "raid1", 00:35:30.893 "superblock": false, 00:35:30.893 "num_base_bdevs": 4, 00:35:30.893 "num_base_bdevs_discovered": 3, 00:35:30.893 "num_base_bdevs_operational": 3, 00:35:30.893 "base_bdevs_list": [ 00:35:30.893 { 00:35:30.893 "name": "spare", 00:35:30.893 "uuid": 
"c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": null, 00:35:30.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.893 "is_configured": false, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev3", 00:35:30.893 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev4", 00:35:30.893 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 } 00:35:30.893 ] 00:35:30.893 }' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:30.893 14:04:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:30.893 "name": "raid_bdev1", 00:35:30.893 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:30.893 "strip_size_kb": 0, 00:35:30.893 "state": "online", 00:35:30.893 "raid_level": "raid1", 00:35:30.893 "superblock": false, 00:35:30.893 "num_base_bdevs": 4, 00:35:30.893 "num_base_bdevs_discovered": 3, 00:35:30.893 "num_base_bdevs_operational": 3, 00:35:30.893 "base_bdevs_list": [ 00:35:30.893 { 00:35:30.893 "name": "spare", 00:35:30.893 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": null, 00:35:30.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.893 "is_configured": false, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev3", 00:35:30.893 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev4", 00:35:30.893 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 } 00:35:30.893 ] 00:35:30.893 }' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:30.893 "name": "raid_bdev1", 00:35:30.893 "uuid": "aa281525-a0ce-4232-8ff6-49382dad8000", 00:35:30.893 "strip_size_kb": 0, 00:35:30.893 "state": "online", 00:35:30.893 "raid_level": "raid1", 00:35:30.893 "superblock": false, 00:35:30.893 "num_base_bdevs": 4, 00:35:30.893 "num_base_bdevs_discovered": 3, 00:35:30.893 "num_base_bdevs_operational": 3, 00:35:30.893 "base_bdevs_list": [ 00:35:30.893 { 00:35:30.893 "name": "spare", 00:35:30.893 "uuid": "c2d03440-9f64-543f-98a1-bd70d14c38b8", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": null, 00:35:30.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.893 "is_configured": false, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev3", 00:35:30.893 "uuid": "8186180b-73ae-583e-a37f-1262596dcf63", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 }, 00:35:30.893 { 00:35:30.893 "name": "BaseBdev4", 00:35:30.893 "uuid": "11d3dd3b-6848-5967-b64d-6f1faa679493", 00:35:30.893 "is_configured": true, 00:35:30.893 "data_offset": 0, 00:35:30.893 "data_size": 65536 00:35:30.893 } 00:35:30.893 ] 00:35:30.893 }' 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:30.893 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:31.461 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:31.461 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.461 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:31.462 90.62 
IOPS, 271.88 MiB/s [2024-10-09T14:04:38.013Z] [2024-10-09 14:04:37.785812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:31.462 [2024-10-09 14:04:37.785845] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:31.462 00:35:31.462 Latency(us) 00:35:31.462 [2024-10-09T14:04:38.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.462 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:35:31.462 raid_bdev1 : 8.12 89.73 269.19 0.00 0.00 15208.95 275.02 113346.07 00:35:31.462 [2024-10-09T14:04:38.013Z] =================================================================================================================== 00:35:31.462 [2024-10-09T14:04:38.013Z] Total : 89.73 269.19 0.00 0.00 15208.95 275.02 113346.07 00:35:31.462 [2024-10-09 14:04:37.889113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:31.462 [2024-10-09 14:04:37.889157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:31.462 [2024-10-09 14:04:37.889258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:31.462 [2024-10-09 14:04:37.889271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:35:31.462 { 00:35:31.462 "results": [ 00:35:31.462 { 00:35:31.462 "job": "raid_bdev1", 00:35:31.462 "core_mask": "0x1", 00:35:31.462 "workload": "randrw", 00:35:31.462 "percentage": 50, 00:35:31.462 "status": "finished", 00:35:31.462 "queue_depth": 2, 00:35:31.462 "io_size": 3145728, 00:35:31.462 "runtime": 8.124382, 00:35:31.462 "iops": 89.72990191746277, 00:35:31.462 "mibps": 269.1897057523883, 00:35:31.462 "io_failed": 0, 00:35:31.462 "io_timeout": 0, 00:35:31.462 "avg_latency_us": 15208.95314390228, 00:35:31.462 "min_latency_us": 275.01714285714286, 00:35:31.462 
"max_latency_us": 113346.07238095238 00:35:31.462 } 00:35:31.462 ], 00:35:31.462 "core_count": 1 00:35:31.462 } 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:31.462 14:04:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:31.462 14:04:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:35:31.721 /dev/nbd0 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:31.721 1+0 records in 00:35:31.721 1+0 records out 00:35:31.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315623 s, 13.0 MB/s 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:31.721 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:35:31.982 /dev/nbd1 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:31.982 1+0 records in 00:35:31.982 1+0 records out 00:35:31.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375375 s, 10.9 MB/s 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
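The verification step running above can be sketched in isolation. This is a minimal, hypothetical rendition of the pattern the harness uses (expose the rebuilt spare and each surviving base bdev as nbd block devices via `nbd_start_disk`, then byte-compare them with `cmp -i 0` from the base bdevs' `data_offset` of 0); plain files stand in for `/dev/nbd0` and `/dev/nbd1` so the sketch runs without an SPDK target, and all file names are illustrative:

```shell
# Sketch of the post-rebuild data check: after a raid1 rebuild finishes,
# every surviving base bdev must be byte-identical to the rebuilt spare.
# In the real test, nbd_start_disk exposes each bdev as /dev/nbdN; here
# two temp files stand in for those devices.
set -euo pipefail

workdir=$(mktemp -d)

# Stand-ins for the spare (/dev/nbd0) and one base bdev (/dev/nbd1).
dd if=/dev/urandom of="$workdir/spare" bs=4096 count=16 status=none
cp "$workdir/spare" "$workdir/base_bdev3"

# cmp -i 0 skips 0 initial bytes, i.e. compares from the bdevs' data_offset.
if cmp -i 0 "$workdir/spare" "$workdir/base_bdev3"; then
    verify_status="rebuild data verified"
else
    verify_status="data mismatch after rebuild"
fi
echo "$verify_status"
```

The real script repeats this `nbd_start_disk` / `cmp` / `nbd_stop_disk` cycle once per surviving base bdev (BaseBdev3, then BaseBdev4), skipping the removed slot whose entry is the all-zero UUID.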
00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:31.982 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 
-- # for bdev in "${base_bdevs[@]:1}" 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:32.241 14:04:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:35:32.809 /dev/nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:32.809 1+0 records in 00:35:32.809 1+0 records out 00:35:32.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375822 s, 10.9 MB/s 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:32.809 14:04:39 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:32.809 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89759 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89759 ']' 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89759 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:33.068 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89759 00:35:33.327 killing process with pid 89759 00:35:33.327 Received shutdown signal, test time was about 9.871951 seconds 00:35:33.327 00:35:33.327 Latency(us) 00:35:33.327 [2024-10-09T14:04:39.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.327 [2024-10-09T14:04:39.878Z] 
=================================================================================================================== 00:35:33.327 [2024-10-09T14:04:39.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:33.327 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:33.327 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:33.327 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89759' 00:35:33.327 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89759 00:35:33.327 [2024-10-09 14:04:39.633311] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:33.327 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89759 00:35:33.327 [2024-10-09 14:04:39.681362] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:35:33.587 00:35:33.587 real 0m12.088s 00:35:33.587 user 0m15.604s 00:35:33.587 sys 0m2.014s 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:35:33.587 ************************************ 00:35:33.587 END TEST raid_rebuild_test_io 00:35:33.587 ************************************ 00:35:33.587 14:04:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:35:33.587 14:04:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:35:33.587 14:04:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:33.587 14:04:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:33.587 ************************************ 00:35:33.587 START TEST raid_rebuild_test_sb_io 
00:35:33.587 ************************************ 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:33.587 14:04:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:35:33.587 14:04:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:35:33.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90157 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90157 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 90157 ']' 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:33.587 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:33.587 [2024-10-09 14:04:40.086955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:35:33.587 [2024-10-09 14:04:40.087291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:35:33.587 Zero copy mechanism will not be used. 
00:35:33.587 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90157 ] 00:35:33.846 [2024-10-09 14:04:40.245178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.846 [2024-10-09 14:04:40.291443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.846 [2024-10-09 14:04:40.335612] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:33.846 [2024-10-09 14:04:40.335647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.783 BaseBdev1_malloc 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.783 14:04:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.783 [2024-10-09 14:04:41.004477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:34.783 [2024-10-09 14:04:41.004575] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.783 [2024-10-09 14:04:41.004605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:34.783 [2024-10-09 14:04:41.004626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.783 [2024-10-09 14:04:41.007267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.783 [2024-10-09 14:04:41.007308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:34.784 BaseBdev1 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 BaseBdev2_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 [2024-10-09 14:04:41.048847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:34.784 [2024-10-09 14:04:41.048921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.784 [2024-10-09 14:04:41.048957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:35:34.784 [2024-10-09 14:04:41.048975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.784 [2024-10-09 14:04:41.052671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.784 [2024-10-09 14:04:41.052723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:34.784 BaseBdev2 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 BaseBdev3_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 [2024-10-09 14:04:41.078201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:34.784 [2024-10-09 14:04:41.078251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.784 [2024-10-09 14:04:41.078280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:34.784 [2024-10-09 14:04:41.078292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.784 [2024-10-09 
14:04:41.080726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.784 [2024-10-09 14:04:41.080762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:34.784 BaseBdev3 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 BaseBdev4_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 [2024-10-09 14:04:41.107316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:34.784 [2024-10-09 14:04:41.107372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.784 [2024-10-09 14:04:41.107400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:34.784 [2024-10-09 14:04:41.107411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.784 [2024-10-09 14:04:41.109871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.784 [2024-10-09 14:04:41.110022] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:34.784 BaseBdev4 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 spare_malloc 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 spare_delay 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 [2024-10-09 14:04:41.148546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:34.784 [2024-10-09 14:04:41.148607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:34.784 [2024-10-09 14:04:41.148648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:34.784 [2024-10-09 14:04:41.148659] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:34.784 [2024-10-09 14:04:41.151113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:34.784 [2024-10-09 14:04:41.151149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:34.784 spare 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 [2024-10-09 14:04:41.160657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:34.784 [2024-10-09 14:04:41.162812] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:34.784 [2024-10-09 14:04:41.162883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:34.784 [2024-10-09 14:04:41.162925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:34.784 [2024-10-09 14:04:41.163090] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:35:34.784 [2024-10-09 14:04:41.163109] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:34.784 [2024-10-09 14:04:41.163361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:34.784 [2024-10-09 14:04:41.163510] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:35:34.784 [2024-10-09 14:04:41.163524] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:35:34.784 
[2024-10-09 14:04:41.163659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.784 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:35:34.784 "name": "raid_bdev1", 00:35:34.784 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:34.784 "strip_size_kb": 0, 00:35:34.784 "state": "online", 00:35:34.784 "raid_level": "raid1", 00:35:34.784 "superblock": true, 00:35:34.784 "num_base_bdevs": 4, 00:35:34.784 "num_base_bdevs_discovered": 4, 00:35:34.784 "num_base_bdevs_operational": 4, 00:35:34.784 "base_bdevs_list": [ 00:35:34.784 { 00:35:34.784 "name": "BaseBdev1", 00:35:34.784 "uuid": "7bb97454-0fd2-52cd-a840-b0a3997c977d", 00:35:34.784 "is_configured": true, 00:35:34.784 "data_offset": 2048, 00:35:34.784 "data_size": 63488 00:35:34.784 }, 00:35:34.784 { 00:35:34.784 "name": "BaseBdev2", 00:35:34.784 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:34.784 "is_configured": true, 00:35:34.784 "data_offset": 2048, 00:35:34.784 "data_size": 63488 00:35:34.784 }, 00:35:34.784 { 00:35:34.784 "name": "BaseBdev3", 00:35:34.784 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:34.784 "is_configured": true, 00:35:34.784 "data_offset": 2048, 00:35:34.784 "data_size": 63488 00:35:34.784 }, 00:35:34.784 { 00:35:34.784 "name": "BaseBdev4", 00:35:34.784 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:34.784 "is_configured": true, 00:35:34.784 "data_offset": 2048, 00:35:34.784 "data_size": 63488 00:35:34.784 } 00:35:34.784 ] 00:35:34.784 }' 00:35:34.785 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:34.785 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.043 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:35.043 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:35.043 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.043 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:35:35.043 [2024-10-09 14:04:41.585043] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.303 [2024-10-09 14:04:41.692727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:35.303 14:04:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:35.303 "name": "raid_bdev1", 00:35:35.303 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:35.303 "strip_size_kb": 0, 00:35:35.303 "state": "online", 00:35:35.303 "raid_level": "raid1", 00:35:35.303 "superblock": true, 00:35:35.303 "num_base_bdevs": 4, 00:35:35.303 "num_base_bdevs_discovered": 3, 00:35:35.303 "num_base_bdevs_operational": 3, 
00:35:35.303 "base_bdevs_list": [ 00:35:35.303 { 00:35:35.303 "name": null, 00:35:35.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:35.303 "is_configured": false, 00:35:35.303 "data_offset": 0, 00:35:35.303 "data_size": 63488 00:35:35.303 }, 00:35:35.303 { 00:35:35.303 "name": "BaseBdev2", 00:35:35.303 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:35.303 "is_configured": true, 00:35:35.303 "data_offset": 2048, 00:35:35.303 "data_size": 63488 00:35:35.303 }, 00:35:35.303 { 00:35:35.303 "name": "BaseBdev3", 00:35:35.303 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:35.303 "is_configured": true, 00:35:35.303 "data_offset": 2048, 00:35:35.303 "data_size": 63488 00:35:35.303 }, 00:35:35.303 { 00:35:35.303 "name": "BaseBdev4", 00:35:35.303 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:35.303 "is_configured": true, 00:35:35.303 "data_offset": 2048, 00:35:35.303 "data_size": 63488 00:35:35.303 } 00:35:35.303 ] 00:35:35.303 }' 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:35.303 14:04:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.303 [2024-10-09 14:04:41.846889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:35.303 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:35.303 Zero copy mechanism will not be used. 00:35:35.303 Running I/O for 60 seconds... 
00:35:35.871 14:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:35.871 14:04:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:35.871 14:04:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:35.871 [2024-10-09 14:04:42.167009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:35.871 14:04:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:35.871 14:04:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:35.871 [2024-10-09 14:04:42.217589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:35.871 [2024-10-09 14:04:42.219967] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:35.871 [2024-10-09 14:04:42.322526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:35.871 [2024-10-09 14:04:42.322956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:36.130 [2024-10-09 14:04:42.437427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:36.130 [2024-10-09 14:04:42.438065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:36.389 [2024-10-09 14:04:42.762563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:36.389 159.00 IOPS, 477.00 MiB/s [2024-10-09T14:04:42.940Z] [2024-10-09 14:04:42.885941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:36.389 [2024-10-09 14:04:42.886241] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:36.957 [2024-10-09 14:04:43.213866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:36.957 [2024-10-09 14:04:43.215054] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:36.957 "name": "raid_bdev1", 00:35:36.957 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:36.957 "strip_size_kb": 0, 00:35:36.957 "state": "online", 00:35:36.957 "raid_level": "raid1", 00:35:36.957 "superblock": true, 00:35:36.957 "num_base_bdevs": 4, 00:35:36.957 "num_base_bdevs_discovered": 4, 
00:35:36.957 "num_base_bdevs_operational": 4, 00:35:36.957 "process": { 00:35:36.957 "type": "rebuild", 00:35:36.957 "target": "spare", 00:35:36.957 "progress": { 00:35:36.957 "blocks": 14336, 00:35:36.957 "percent": 22 00:35:36.957 } 00:35:36.957 }, 00:35:36.957 "base_bdevs_list": [ 00:35:36.957 { 00:35:36.957 "name": "spare", 00:35:36.957 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:36.957 "is_configured": true, 00:35:36.957 "data_offset": 2048, 00:35:36.957 "data_size": 63488 00:35:36.957 }, 00:35:36.957 { 00:35:36.957 "name": "BaseBdev2", 00:35:36.957 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:36.957 "is_configured": true, 00:35:36.957 "data_offset": 2048, 00:35:36.957 "data_size": 63488 00:35:36.957 }, 00:35:36.957 { 00:35:36.957 "name": "BaseBdev3", 00:35:36.957 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:36.957 "is_configured": true, 00:35:36.957 "data_offset": 2048, 00:35:36.957 "data_size": 63488 00:35:36.957 }, 00:35:36.957 { 00:35:36.957 "name": "BaseBdev4", 00:35:36.957 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:36.957 "is_configured": true, 00:35:36.957 "data_offset": 2048, 00:35:36.957 "data_size": 63488 00:35:36.957 } 00:35:36.957 ] 00:35:36.957 }' 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.957 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:35:36.957 [2024-10-09 14:04:43.347605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:36.957 [2024-10-09 14:04:43.425402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:36.957 [2024-10-09 14:04:43.426023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:35:37.216 [2024-10-09 14:04:43.540368] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:37.216 [2024-10-09 14:04:43.558435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:37.216 [2024-10-09 14:04:43.558490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:37.216 [2024-10-09 14:04:43.558509] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:37.216 [2024-10-09 14:04:43.577065] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:37.216 14:04:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:37.216 "name": "raid_bdev1", 00:35:37.216 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:37.216 "strip_size_kb": 0, 00:35:37.216 "state": "online", 00:35:37.216 "raid_level": "raid1", 00:35:37.216 "superblock": true, 00:35:37.216 "num_base_bdevs": 4, 00:35:37.216 "num_base_bdevs_discovered": 3, 00:35:37.216 "num_base_bdevs_operational": 3, 00:35:37.216 "base_bdevs_list": [ 00:35:37.216 { 00:35:37.216 "name": null, 00:35:37.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.216 "is_configured": false, 00:35:37.216 "data_offset": 0, 00:35:37.216 "data_size": 63488 00:35:37.216 }, 00:35:37.216 { 00:35:37.216 "name": "BaseBdev2", 00:35:37.216 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:37.216 "is_configured": true, 00:35:37.216 "data_offset": 2048, 00:35:37.216 "data_size": 63488 00:35:37.216 }, 00:35:37.216 { 00:35:37.216 "name": "BaseBdev3", 00:35:37.216 "uuid": 
"d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:37.216 "is_configured": true, 00:35:37.216 "data_offset": 2048, 00:35:37.216 "data_size": 63488 00:35:37.216 }, 00:35:37.216 { 00:35:37.216 "name": "BaseBdev4", 00:35:37.216 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:37.216 "is_configured": true, 00:35:37.216 "data_offset": 2048, 00:35:37.216 "data_size": 63488 00:35:37.216 } 00:35:37.216 ] 00:35:37.216 }' 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:37.216 14:04:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 140.00 IOPS, 420.00 MiB/s [2024-10-09T14:04:44.285Z] 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:37.734 "name": "raid_bdev1", 00:35:37.734 "uuid": 
"38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:37.734 "strip_size_kb": 0, 00:35:37.734 "state": "online", 00:35:37.734 "raid_level": "raid1", 00:35:37.734 "superblock": true, 00:35:37.734 "num_base_bdevs": 4, 00:35:37.734 "num_base_bdevs_discovered": 3, 00:35:37.734 "num_base_bdevs_operational": 3, 00:35:37.734 "base_bdevs_list": [ 00:35:37.734 { 00:35:37.734 "name": null, 00:35:37.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:37.734 "is_configured": false, 00:35:37.734 "data_offset": 0, 00:35:37.734 "data_size": 63488 00:35:37.734 }, 00:35:37.734 { 00:35:37.734 "name": "BaseBdev2", 00:35:37.734 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:37.734 "is_configured": true, 00:35:37.734 "data_offset": 2048, 00:35:37.734 "data_size": 63488 00:35:37.734 }, 00:35:37.734 { 00:35:37.734 "name": "BaseBdev3", 00:35:37.734 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:37.734 "is_configured": true, 00:35:37.734 "data_offset": 2048, 00:35:37.734 "data_size": 63488 00:35:37.734 }, 00:35:37.734 { 00:35:37.734 "name": "BaseBdev4", 00:35:37.734 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:37.734 "is_configured": true, 00:35:37.734 "data_offset": 2048, 00:35:37.734 "data_size": 63488 00:35:37.734 } 00:35:37.734 ] 00:35:37.734 }' 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.734 14:04:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:37.734 [2024-10-09 14:04:44.186678] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.734 14:04:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:37.734 [2024-10-09 14:04:44.244816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:35:37.734 [2024-10-09 14:04:44.247210] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:37.993 [2024-10-09 14:04:44.364740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:37.993 [2024-10-09 14:04:44.365910] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:35:38.252 [2024-10-09 14:04:44.582382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:38.253 [2024-10-09 14:04:44.583004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:35:38.512 141.33 IOPS, 424.00 MiB/s [2024-10-09T14:04:45.063Z] [2024-10-09 14:04:44.905202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:38.512 [2024-10-09 14:04:44.906532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:35:38.771 [2024-10-09 14:04:45.124085] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:35:38.771 [2024-10-09 14:04:45.124791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 
00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:38.771 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:38.771 "name": "raid_bdev1", 00:35:38.771 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:38.771 "strip_size_kb": 0, 00:35:38.771 "state": "online", 00:35:38.771 "raid_level": "raid1", 00:35:38.771 "superblock": true, 00:35:38.771 "num_base_bdevs": 4, 00:35:38.771 "num_base_bdevs_discovered": 4, 00:35:38.771 "num_base_bdevs_operational": 4, 00:35:38.772 "process": { 00:35:38.772 "type": "rebuild", 00:35:38.772 "target": "spare", 00:35:38.772 "progress": { 00:35:38.772 "blocks": 10240, 00:35:38.772 "percent": 16 00:35:38.772 } 00:35:38.772 }, 00:35:38.772 "base_bdevs_list": [ 00:35:38.772 { 00:35:38.772 "name": "spare", 00:35:38.772 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:38.772 "is_configured": true, 00:35:38.772 
"data_offset": 2048, 00:35:38.772 "data_size": 63488 00:35:38.772 }, 00:35:38.772 { 00:35:38.772 "name": "BaseBdev2", 00:35:38.772 "uuid": "334817d2-3ec8-542e-9d1f-3dba07e03ef7", 00:35:38.772 "is_configured": true, 00:35:38.772 "data_offset": 2048, 00:35:38.772 "data_size": 63488 00:35:38.772 }, 00:35:38.772 { 00:35:38.772 "name": "BaseBdev3", 00:35:38.772 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:38.772 "is_configured": true, 00:35:38.772 "data_offset": 2048, 00:35:38.772 "data_size": 63488 00:35:38.772 }, 00:35:38.772 { 00:35:38.772 "name": "BaseBdev4", 00:35:38.772 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:38.772 "is_configured": true, 00:35:38.772 "data_offset": 2048, 00:35:38.772 "data_size": 63488 00:35:38.772 } 00:35:38.772 ] 00:35:38.772 }' 00:35:38.772 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:38.772 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:38.772 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:39.030 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:35:39.031 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev2 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.031 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:39.031 [2024-10-09 14:04:45.372834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:39.031 [2024-10-09 14:04:45.465010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:39.031 [2024-10-09 14:04:45.465490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:39.294 [2024-10-09 14:04:45.673260] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:35:39.294 [2024-10-09 14:04:45.673300] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:35:39.294 [2024-10-09 14:04:45.673363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 
-- # local raid_bdev_info 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:39.294 "name": "raid_bdev1", 00:35:39.294 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:39.294 "strip_size_kb": 0, 00:35:39.294 "state": "online", 00:35:39.294 "raid_level": "raid1", 00:35:39.294 "superblock": true, 00:35:39.294 "num_base_bdevs": 4, 00:35:39.294 "num_base_bdevs_discovered": 3, 00:35:39.294 "num_base_bdevs_operational": 3, 00:35:39.294 "process": { 00:35:39.294 "type": "rebuild", 00:35:39.294 "target": "spare", 00:35:39.294 "progress": { 00:35:39.294 "blocks": 14336, 00:35:39.294 "percent": 22 00:35:39.294 } 00:35:39.294 }, 00:35:39.294 "base_bdevs_list": [ 00:35:39.294 { 00:35:39.294 "name": "spare", 00:35:39.294 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:39.294 "is_configured": true, 00:35:39.294 "data_offset": 2048, 00:35:39.294 "data_size": 63488 00:35:39.294 }, 00:35:39.294 { 00:35:39.294 "name": null, 00:35:39.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.294 "is_configured": false, 00:35:39.294 "data_offset": 0, 00:35:39.294 "data_size": 63488 00:35:39.294 }, 00:35:39.294 { 00:35:39.294 "name": "BaseBdev3", 00:35:39.294 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:39.294 "is_configured": true, 00:35:39.294 "data_offset": 2048, 00:35:39.294 "data_size": 63488 00:35:39.294 }, 00:35:39.294 { 00:35:39.294 "name": 
"BaseBdev4", 00:35:39.294 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:39.294 "is_configured": true, 00:35:39.294 "data_offset": 2048, 00:35:39.294 "data_size": 63488 00:35:39.294 } 00:35:39.294 ] 00:35:39.294 }' 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:39.294 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.566 134.25 IOPS, 402.75 MiB/s 
[2024-10-09T14:04:46.117Z] 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:39.566 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:39.566 "name": "raid_bdev1", 00:35:39.566 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:39.566 "strip_size_kb": 0, 00:35:39.566 "state": "online", 00:35:39.566 "raid_level": "raid1", 00:35:39.566 "superblock": true, 00:35:39.566 "num_base_bdevs": 4, 00:35:39.566 "num_base_bdevs_discovered": 3, 00:35:39.566 "num_base_bdevs_operational": 3, 00:35:39.566 "process": { 00:35:39.566 "type": "rebuild", 00:35:39.566 "target": "spare", 00:35:39.566 "progress": { 00:35:39.566 "blocks": 16384, 00:35:39.566 "percent": 25 00:35:39.566 } 00:35:39.566 }, 00:35:39.566 "base_bdevs_list": [ 00:35:39.566 { 00:35:39.566 "name": "spare", 00:35:39.566 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:39.566 "is_configured": true, 00:35:39.566 "data_offset": 2048, 00:35:39.566 "data_size": 63488 00:35:39.566 }, 00:35:39.566 { 00:35:39.566 "name": null, 00:35:39.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.566 "is_configured": false, 00:35:39.566 "data_offset": 0, 00:35:39.566 "data_size": 63488 00:35:39.566 }, 00:35:39.566 { 00:35:39.566 "name": "BaseBdev3", 00:35:39.566 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:39.566 "is_configured": true, 00:35:39.566 "data_offset": 2048, 00:35:39.566 "data_size": 63488 00:35:39.566 }, 00:35:39.566 { 00:35:39.566 "name": "BaseBdev4", 00:35:39.566 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:39.566 "is_configured": true, 00:35:39.566 "data_offset": 2048, 00:35:39.566 "data_size": 63488 00:35:39.566 } 00:35:39.566 ] 00:35:39.566 }' 00:35:39.566 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:39.566 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:39.566 
14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:39.566 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:39.566 14:04:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:39.825 [2024-10-09 14:04:46.347342] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:35:39.825 [2024-10-09 14:04:46.347867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:35:40.084 [2024-10-09 14:04:46.464027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:35:40.600 121.20 IOPS, 363.60 MiB/s [2024-10-09T14:04:47.151Z] 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:35:40.600 14:04:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:40.600 14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:40.600 "name": "raid_bdev1", 00:35:40.600 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:40.600 "strip_size_kb": 0, 00:35:40.600 "state": "online", 00:35:40.600 "raid_level": "raid1", 00:35:40.600 "superblock": true, 00:35:40.600 "num_base_bdevs": 4, 00:35:40.600 "num_base_bdevs_discovered": 3, 00:35:40.600 "num_base_bdevs_operational": 3, 00:35:40.600 "process": { 00:35:40.600 "type": "rebuild", 00:35:40.600 "target": "spare", 00:35:40.600 "progress": { 00:35:40.600 "blocks": 36864, 00:35:40.600 "percent": 58 00:35:40.600 } 00:35:40.600 }, 00:35:40.600 "base_bdevs_list": [ 00:35:40.600 { 00:35:40.600 "name": "spare", 00:35:40.600 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:40.600 "is_configured": true, 00:35:40.600 "data_offset": 2048, 00:35:40.600 "data_size": 63488 00:35:40.600 }, 00:35:40.600 { 00:35:40.600 "name": null, 00:35:40.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:40.600 "is_configured": false, 00:35:40.600 "data_offset": 0, 00:35:40.600 "data_size": 63488 00:35:40.600 }, 00:35:40.600 { 00:35:40.600 "name": "BaseBdev3", 00:35:40.600 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:40.600 "is_configured": true, 00:35:40.600 "data_offset": 2048, 00:35:40.600 "data_size": 63488 00:35:40.600 }, 00:35:40.600 { 00:35:40.600 "name": "BaseBdev4", 00:35:40.600 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:40.600 "is_configured": true, 00:35:40.600 "data_offset": 2048, 00:35:40.600 "data_size": 63488 00:35:40.600 } 00:35:40.600 ] 00:35:40.600 }' 00:35:40.600 14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:40.600 14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:40.600 
14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:40.600 14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:40.600 14:04:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:40.859 [2024-10-09 14:04:47.192320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:35:41.427 [2024-10-09 14:04:47.849306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:35:41.695 109.83 IOPS, 329.50 MiB/s [2024-10-09T14:04:48.246Z] 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:41.695 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.695 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:41.695 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:41.696 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.696 14:04:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:41.696 "name": "raid_bdev1", 00:35:41.696 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:41.696 "strip_size_kb": 0, 00:35:41.696 "state": "online", 00:35:41.696 "raid_level": "raid1", 00:35:41.696 "superblock": true, 00:35:41.696 "num_base_bdevs": 4, 00:35:41.696 "num_base_bdevs_discovered": 3, 00:35:41.696 "num_base_bdevs_operational": 3, 00:35:41.696 "process": { 00:35:41.696 "type": "rebuild", 00:35:41.696 "target": "spare", 00:35:41.696 "progress": { 00:35:41.696 "blocks": 57344, 00:35:41.696 "percent": 90 00:35:41.696 } 00:35:41.696 }, 00:35:41.696 "base_bdevs_list": [ 00:35:41.696 { 00:35:41.696 "name": "spare", 00:35:41.696 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:41.696 "is_configured": true, 00:35:41.696 "data_offset": 2048, 00:35:41.696 "data_size": 63488 00:35:41.696 }, 00:35:41.696 { 00:35:41.696 "name": null, 00:35:41.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:41.696 "is_configured": false, 00:35:41.696 "data_offset": 0, 00:35:41.696 "data_size": 63488 00:35:41.696 }, 00:35:41.696 { 00:35:41.696 "name": "BaseBdev3", 00:35:41.696 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:41.696 "is_configured": true, 00:35:41.696 "data_offset": 2048, 00:35:41.697 "data_size": 63488 00:35:41.697 }, 00:35:41.697 { 00:35:41.697 "name": "BaseBdev4", 00:35:41.697 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:41.697 "is_configured": true, 00:35:41.697 "data_offset": 2048, 00:35:41.697 "data_size": 63488 00:35:41.697 } 00:35:41.697 ] 00:35:41.697 }' 00:35:41.697 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:41.697 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.697 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:41.697 14:04:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.697 14:04:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:41.960 [2024-10-09 14:04:48.393812] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:41.960 [2024-10-09 14:04:48.499422] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:41.960 [2024-10-09 14:04:48.502765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:42.785 98.14 IOPS, 294.43 MiB/s [2024-10-09T14:04:49.336Z] 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:42.785 "name": 
"raid_bdev1", 00:35:42.785 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:42.785 "strip_size_kb": 0, 00:35:42.785 "state": "online", 00:35:42.785 "raid_level": "raid1", 00:35:42.785 "superblock": true, 00:35:42.785 "num_base_bdevs": 4, 00:35:42.785 "num_base_bdevs_discovered": 3, 00:35:42.785 "num_base_bdevs_operational": 3, 00:35:42.785 "base_bdevs_list": [ 00:35:42.785 { 00:35:42.785 "name": "spare", 00:35:42.785 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:42.785 "is_configured": true, 00:35:42.785 "data_offset": 2048, 00:35:42.785 "data_size": 63488 00:35:42.785 }, 00:35:42.785 { 00:35:42.785 "name": null, 00:35:42.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.785 "is_configured": false, 00:35:42.785 "data_offset": 0, 00:35:42.785 "data_size": 63488 00:35:42.785 }, 00:35:42.785 { 00:35:42.785 "name": "BaseBdev3", 00:35:42.785 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:42.785 "is_configured": true, 00:35:42.785 "data_offset": 2048, 00:35:42.785 "data_size": 63488 00:35:42.785 }, 00:35:42.785 { 00:35:42.785 "name": "BaseBdev4", 00:35:42.785 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:42.785 "is_configured": true, 00:35:42.785 "data_offset": 2048, 00:35:42.785 "data_size": 63488 00:35:42.785 } 00:35:42.785 ] 00:35:42.785 }' 00:35:42.785 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:43.043 
14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:43.043 "name": "raid_bdev1", 00:35:43.043 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:43.043 "strip_size_kb": 0, 00:35:43.043 "state": "online", 00:35:43.043 "raid_level": "raid1", 00:35:43.043 "superblock": true, 00:35:43.043 "num_base_bdevs": 4, 00:35:43.043 "num_base_bdevs_discovered": 3, 00:35:43.043 "num_base_bdevs_operational": 3, 00:35:43.043 "base_bdevs_list": [ 00:35:43.043 { 00:35:43.043 "name": "spare", 00:35:43.043 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:43.043 "is_configured": true, 00:35:43.043 "data_offset": 2048, 00:35:43.043 "data_size": 63488 00:35:43.043 }, 00:35:43.043 { 00:35:43.043 "name": null, 00:35:43.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:43.043 "is_configured": false, 00:35:43.043 "data_offset": 0, 00:35:43.043 "data_size": 63488 00:35:43.043 }, 00:35:43.043 { 00:35:43.043 "name": "BaseBdev3", 00:35:43.043 "uuid": 
"d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:43.043 "is_configured": true, 00:35:43.043 "data_offset": 2048, 00:35:43.043 "data_size": 63488 00:35:43.043 }, 00:35:43.043 { 00:35:43.043 "name": "BaseBdev4", 00:35:43.043 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:43.043 "is_configured": true, 00:35:43.043 "data_offset": 2048, 00:35:43.043 "data_size": 63488 00:35:43.043 } 00:35:43.043 ] 00:35:43.043 }' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.043 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.044 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:43.044 "name": "raid_bdev1", 00:35:43.044 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:43.044 "strip_size_kb": 0, 00:35:43.044 "state": "online", 00:35:43.044 "raid_level": "raid1", 00:35:43.044 "superblock": true, 00:35:43.044 "num_base_bdevs": 4, 00:35:43.044 "num_base_bdevs_discovered": 3, 00:35:43.044 "num_base_bdevs_operational": 3, 00:35:43.044 "base_bdevs_list": [ 00:35:43.044 { 00:35:43.044 "name": "spare", 00:35:43.044 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:43.044 "is_configured": true, 00:35:43.044 "data_offset": 2048, 00:35:43.044 "data_size": 63488 00:35:43.044 }, 00:35:43.044 { 00:35:43.044 "name": null, 00:35:43.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:43.044 "is_configured": false, 00:35:43.044 "data_offset": 0, 00:35:43.044 "data_size": 63488 00:35:43.044 }, 00:35:43.044 { 00:35:43.044 "name": "BaseBdev3", 00:35:43.044 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:43.044 "is_configured": true, 00:35:43.044 "data_offset": 2048, 00:35:43.044 "data_size": 63488 00:35:43.044 }, 00:35:43.044 { 00:35:43.044 "name": "BaseBdev4", 00:35:43.044 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:43.044 "is_configured": true, 00:35:43.044 "data_offset": 2048, 00:35:43.044 "data_size": 63488 00:35:43.044 } 00:35:43.044 ] 00:35:43.044 }' 
00:35:43.044 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:43.044 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.611 90.38 IOPS, 271.12 MiB/s [2024-10-09T14:04:50.162Z] 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:43.611 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.611 14:04:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.611 [2024-10-09 14:04:49.999659] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:43.611 [2024-10-09 14:04:49.999714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:43.611 00:35:43.611 Latency(us) 00:35:43.611 [2024-10-09T14:04:50.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.611 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:35:43.611 raid_bdev1 : 8.17 88.94 266.82 0.00 0.00 15013.02 300.37 115842.68 00:35:43.611 [2024-10-09T14:04:50.162Z] =================================================================================================================== 00:35:43.611 [2024-10-09T14:04:50.162Z] Total : 88.94 266.82 0.00 0.00 15013.02 300.37 115842.68 00:35:43.611 [2024-10-09 14:04:50.027760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:43.611 [2024-10-09 14:04:50.027814] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:43.611 [2024-10-09 14:04:50.027923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:43.611 [2024-10-09 14:04:50.027939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:35:43.611 { 00:35:43.611 "results": [ 
00:35:43.611 { 00:35:43.611 "job": "raid_bdev1", 00:35:43.611 "core_mask": "0x1", 00:35:43.611 "workload": "randrw", 00:35:43.611 "percentage": 50, 00:35:43.611 "status": "finished", 00:35:43.611 "queue_depth": 2, 00:35:43.611 "io_size": 3145728, 00:35:43.611 "runtime": 8.173992, 00:35:43.611 "iops": 88.94063023306116, 00:35:43.611 "mibps": 266.8218906991835, 00:35:43.611 "io_failed": 0, 00:35:43.611 "io_timeout": 0, 00:35:43.611 "avg_latency_us": 15013.024505141808, 00:35:43.611 "min_latency_us": 300.37333333333333, 00:35:43.611 "max_latency_us": 115842.6819047619 00:35:43.611 } 00:35:43.611 ], 00:35:43.611 "core_count": 1 00:35:43.611 } 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:35:43.611 14:04:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:43.611 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:35:43.869 /dev/nbd0 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:43.869 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:44.128 1+0 records in 00:35:44.128 1+0 records out 00:35:44.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292689 s, 14.0 MB/s 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:44.128 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:35:44.387 /dev/nbd1 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:35:44.387 1+0 records in 00:35:44.387 1+0 records out 00:35:44.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362235 s, 11.3 MB/s 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:44.387 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:44.388 14:04:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:44.647 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:44.647 14:04:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:35:44.906 /dev/nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.165 1+0 records in 00:35:45.165 1+0 records out 00:35:45.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322316 s, 12.7 MB/s 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:45.165 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:45.424 14:04:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:35:45.683 14:04:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:45.683 [2024-10-09 14:04:52.138119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:45.683 [2024-10-09 14:04:52.138179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:45.683 [2024-10-09 14:04:52.138204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:35:45.683 [2024-10-09 14:04:52.138220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:45.683 [2024-10-09 14:04:52.140983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:45.683 [2024-10-09 14:04:52.141030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:45.683 [2024-10-09 14:04:52.141122] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:45.683 [2024-10-09 14:04:52.141176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:45.683 [2024-10-09 14:04:52.141287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:45.683 [2024-10-09 14:04:52.141417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:35:45.683 spare 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.683 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:45.942 [2024-10-09 14:04:52.241515] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:35:45.942 [2024-10-09 14:04:52.241566] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:35:45.942 [2024-10-09 14:04:52.241948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:35:45.942 [2024-10-09 14:04:52.242143] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:35:45.942 [2024-10-09 14:04:52.242165] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:35:45.942 [2024-10-09 14:04:52.242333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:45.942 "name": "raid_bdev1", 00:35:45.942 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:45.942 "strip_size_kb": 0, 00:35:45.942 "state": "online", 00:35:45.942 "raid_level": "raid1", 00:35:45.942 "superblock": true, 00:35:45.942 "num_base_bdevs": 4, 00:35:45.942 "num_base_bdevs_discovered": 3, 00:35:45.942 "num_base_bdevs_operational": 3, 00:35:45.942 "base_bdevs_list": [ 00:35:45.942 { 00:35:45.942 "name": "spare", 00:35:45.942 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:45.942 "is_configured": true, 00:35:45.942 "data_offset": 2048, 00:35:45.942 "data_size": 63488 00:35:45.942 }, 00:35:45.942 { 00:35:45.942 "name": null, 00:35:45.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.942 "is_configured": false, 00:35:45.942 "data_offset": 2048, 00:35:45.942 "data_size": 63488 00:35:45.942 }, 00:35:45.942 { 00:35:45.942 
"name": "BaseBdev3", 00:35:45.942 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:45.942 "is_configured": true, 00:35:45.942 "data_offset": 2048, 00:35:45.942 "data_size": 63488 00:35:45.942 }, 00:35:45.942 { 00:35:45.942 "name": "BaseBdev4", 00:35:45.942 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:45.942 "is_configured": true, 00:35:45.942 "data_offset": 2048, 00:35:45.942 "data_size": 63488 00:35:45.942 } 00:35:45.942 ] 00:35:45.942 }' 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:45.942 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:46.202 "name": "raid_bdev1", 00:35:46.202 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 
00:35:46.202 "strip_size_kb": 0, 00:35:46.202 "state": "online", 00:35:46.202 "raid_level": "raid1", 00:35:46.202 "superblock": true, 00:35:46.202 "num_base_bdevs": 4, 00:35:46.202 "num_base_bdevs_discovered": 3, 00:35:46.202 "num_base_bdevs_operational": 3, 00:35:46.202 "base_bdevs_list": [ 00:35:46.202 { 00:35:46.202 "name": "spare", 00:35:46.202 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:46.202 "is_configured": true, 00:35:46.202 "data_offset": 2048, 00:35:46.202 "data_size": 63488 00:35:46.202 }, 00:35:46.202 { 00:35:46.202 "name": null, 00:35:46.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.202 "is_configured": false, 00:35:46.202 "data_offset": 2048, 00:35:46.202 "data_size": 63488 00:35:46.202 }, 00:35:46.202 { 00:35:46.202 "name": "BaseBdev3", 00:35:46.202 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:46.202 "is_configured": true, 00:35:46.202 "data_offset": 2048, 00:35:46.202 "data_size": 63488 00:35:46.202 }, 00:35:46.202 { 00:35:46.202 "name": "BaseBdev4", 00:35:46.202 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:46.202 "is_configured": true, 00:35:46.202 "data_offset": 2048, 00:35:46.202 "data_size": 63488 00:35:46.202 } 00:35:46.202 ] 00:35:46.202 }' 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:46.202 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.461 [2024-10-09 14:04:52.830596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:46.461 14:04:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:46.461 "name": "raid_bdev1", 00:35:46.461 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:46.461 "strip_size_kb": 0, 00:35:46.461 "state": "online", 00:35:46.461 "raid_level": "raid1", 00:35:46.461 "superblock": true, 00:35:46.461 "num_base_bdevs": 4, 00:35:46.461 "num_base_bdevs_discovered": 2, 00:35:46.461 "num_base_bdevs_operational": 2, 00:35:46.461 "base_bdevs_list": [ 00:35:46.461 { 00:35:46.461 "name": null, 00:35:46.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.461 "is_configured": false, 00:35:46.461 "data_offset": 0, 00:35:46.461 "data_size": 63488 00:35:46.461 }, 00:35:46.461 { 00:35:46.461 "name": null, 00:35:46.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:46.461 "is_configured": false, 00:35:46.461 "data_offset": 2048, 00:35:46.461 "data_size": 63488 00:35:46.461 }, 00:35:46.461 { 00:35:46.461 "name": "BaseBdev3", 00:35:46.461 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:46.461 "is_configured": true, 00:35:46.461 "data_offset": 2048, 00:35:46.461 "data_size": 63488 00:35:46.461 }, 00:35:46.461 { 00:35:46.461 "name": "BaseBdev4", 00:35:46.461 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:46.461 "is_configured": true, 00:35:46.461 "data_offset": 2048, 00:35:46.461 
"data_size": 63488 00:35:46.461 } 00:35:46.461 ] 00:35:46.461 }' 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:46.461 14:04:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.720 14:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:46.720 14:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:46.720 14:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:46.720 [2024-10-09 14:04:53.190740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:46.721 [2024-10-09 14:04:53.190957] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:35:46.721 [2024-10-09 14:04:53.190981] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:46.721 [2024-10-09 14:04:53.191021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:46.721 [2024-10-09 14:04:53.194916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:35:46.721 14:04:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:46.721 14:04:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:35:46.721 [2024-10-09 14:04:53.197461] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:47.657 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:47.928 "name": "raid_bdev1", 00:35:47.928 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:47.928 "strip_size_kb": 0, 00:35:47.928 "state": "online", 
00:35:47.928 "raid_level": "raid1", 00:35:47.928 "superblock": true, 00:35:47.928 "num_base_bdevs": 4, 00:35:47.928 "num_base_bdevs_discovered": 3, 00:35:47.928 "num_base_bdevs_operational": 3, 00:35:47.928 "process": { 00:35:47.928 "type": "rebuild", 00:35:47.928 "target": "spare", 00:35:47.928 "progress": { 00:35:47.928 "blocks": 20480, 00:35:47.928 "percent": 32 00:35:47.928 } 00:35:47.928 }, 00:35:47.928 "base_bdevs_list": [ 00:35:47.928 { 00:35:47.928 "name": "spare", 00:35:47.928 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:47.928 "is_configured": true, 00:35:47.928 "data_offset": 2048, 00:35:47.928 "data_size": 63488 00:35:47.928 }, 00:35:47.928 { 00:35:47.928 "name": null, 00:35:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.928 "is_configured": false, 00:35:47.928 "data_offset": 2048, 00:35:47.928 "data_size": 63488 00:35:47.928 }, 00:35:47.928 { 00:35:47.928 "name": "BaseBdev3", 00:35:47.928 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:47.928 "is_configured": true, 00:35:47.928 "data_offset": 2048, 00:35:47.928 "data_size": 63488 00:35:47.928 }, 00:35:47.928 { 00:35:47.928 "name": "BaseBdev4", 00:35:47.928 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:47.928 "is_configured": true, 00:35:47.928 "data_offset": 2048, 00:35:47.928 "data_size": 63488 00:35:47.928 } 00:35:47.928 ] 00:35:47.928 }' 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:35:47.928 14:04:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.928 [2024-10-09 14:04:54.344365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:47.928 [2024-10-09 14:04:54.404310] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:47.928 [2024-10-09 14:04:54.404375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:47.928 [2024-10-09 14:04:54.404394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:47.928 [2024-10-09 14:04:54.404407] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:47.928 14:04:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:47.928 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:47.928 "name": "raid_bdev1", 00:35:47.928 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:47.928 "strip_size_kb": 0, 00:35:47.928 "state": "online", 00:35:47.928 "raid_level": "raid1", 00:35:47.928 "superblock": true, 00:35:47.928 "num_base_bdevs": 4, 00:35:47.928 "num_base_bdevs_discovered": 2, 00:35:47.928 "num_base_bdevs_operational": 2, 00:35:47.928 "base_bdevs_list": [ 00:35:47.928 { 00:35:47.928 "name": null, 00:35:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.928 "is_configured": false, 00:35:47.928 "data_offset": 0, 00:35:47.928 "data_size": 63488 00:35:47.928 }, 00:35:47.928 { 00:35:47.928 "name": null, 00:35:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.928 "is_configured": false, 00:35:47.928 "data_offset": 2048, 00:35:47.928 "data_size": 63488 00:35:47.928 }, 00:35:47.928 { 00:35:47.928 "name": "BaseBdev3", 00:35:47.928 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:47.928 "is_configured": true, 00:35:47.929 "data_offset": 2048, 00:35:47.929 "data_size": 63488 00:35:47.929 }, 00:35:47.929 { 00:35:47.929 "name": "BaseBdev4", 00:35:47.929 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:47.929 "is_configured": true, 00:35:47.929 "data_offset": 2048, 00:35:47.929 
"data_size": 63488 00:35:47.929 } 00:35:47.929 ] 00:35:47.929 }' 00:35:47.929 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:47.929 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.523 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:48.523 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:48.523 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:48.523 [2024-10-09 14:04:54.860753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:48.523 [2024-10-09 14:04:54.860824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:48.523 [2024-10-09 14:04:54.860858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:35:48.523 [2024-10-09 14:04:54.860875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:48.523 [2024-10-09 14:04:54.861368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:48.523 [2024-10-09 14:04:54.861394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:48.523 [2024-10-09 14:04:54.861490] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:48.523 [2024-10-09 14:04:54.861509] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:35:48.523 [2024-10-09 14:04:54.861523] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:35:48.523 [2024-10-09 14:04:54.861572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:48.523 [2024-10-09 14:04:54.865497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:35:48.523 spare 00:35:48.523 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:48.523 14:04:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:35:48.523 [2024-10-09 14:04:54.868066] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:49.458 "name": "raid_bdev1", 00:35:49.458 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:49.458 "strip_size_kb": 0, 00:35:49.458 
"state": "online", 00:35:49.458 "raid_level": "raid1", 00:35:49.458 "superblock": true, 00:35:49.458 "num_base_bdevs": 4, 00:35:49.458 "num_base_bdevs_discovered": 3, 00:35:49.458 "num_base_bdevs_operational": 3, 00:35:49.458 "process": { 00:35:49.458 "type": "rebuild", 00:35:49.458 "target": "spare", 00:35:49.458 "progress": { 00:35:49.458 "blocks": 20480, 00:35:49.458 "percent": 32 00:35:49.458 } 00:35:49.458 }, 00:35:49.458 "base_bdevs_list": [ 00:35:49.458 { 00:35:49.458 "name": "spare", 00:35:49.458 "uuid": "51225a31-ce40-5fbd-949e-187e5800d01a", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": null, 00:35:49.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.458 "is_configured": false, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": "BaseBdev3", 00:35:49.458 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 }, 00:35:49.458 { 00:35:49.458 "name": "BaseBdev4", 00:35:49.458 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:49.458 "is_configured": true, 00:35:49.458 "data_offset": 2048, 00:35:49.458 "data_size": 63488 00:35:49.458 } 00:35:49.458 ] 00:35:49.458 }' 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:49.458 14:04:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:49.458 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:49.458 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:35:49.458 14:04:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.458 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:49.716 [2024-10-09 14:04:56.010536] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:49.716 [2024-10-09 14:04:56.075012] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:49.716 [2024-10-09 14:04:56.075077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:49.716 [2024-10-09 14:04:56.075099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:49.716 [2024-10-09 14:04:56.075109] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:49.716 14:04:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:49.716 "name": "raid_bdev1", 00:35:49.716 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:49.716 "strip_size_kb": 0, 00:35:49.716 "state": "online", 00:35:49.716 "raid_level": "raid1", 00:35:49.716 "superblock": true, 00:35:49.716 "num_base_bdevs": 4, 00:35:49.716 "num_base_bdevs_discovered": 2, 00:35:49.716 "num_base_bdevs_operational": 2, 00:35:49.716 "base_bdevs_list": [ 00:35:49.716 { 00:35:49.716 "name": null, 00:35:49.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.716 "is_configured": false, 00:35:49.716 "data_offset": 0, 00:35:49.716 "data_size": 63488 00:35:49.716 }, 00:35:49.716 { 00:35:49.716 "name": null, 00:35:49.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.716 "is_configured": false, 00:35:49.716 "data_offset": 2048, 00:35:49.716 "data_size": 63488 00:35:49.716 }, 00:35:49.716 { 00:35:49.716 "name": "BaseBdev3", 00:35:49.716 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:49.716 "is_configured": true, 00:35:49.716 "data_offset": 2048, 00:35:49.716 "data_size": 63488 00:35:49.716 }, 00:35:49.716 { 00:35:49.716 "name": "BaseBdev4", 00:35:49.716 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:49.716 "is_configured": true, 00:35:49.716 "data_offset": 2048, 00:35:49.716 
"data_size": 63488 00:35:49.716 } 00:35:49.716 ] 00:35:49.716 }' 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:49.716 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:50.282 "name": "raid_bdev1", 00:35:50.282 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:50.282 "strip_size_kb": 0, 00:35:50.282 "state": "online", 00:35:50.282 "raid_level": "raid1", 00:35:50.282 "superblock": true, 00:35:50.282 "num_base_bdevs": 4, 00:35:50.282 "num_base_bdevs_discovered": 2, 00:35:50.282 "num_base_bdevs_operational": 2, 00:35:50.282 "base_bdevs_list": [ 00:35:50.282 { 00:35:50.282 "name": null, 00:35:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 
00:35:50.282 "is_configured": false, 00:35:50.282 "data_offset": 0, 00:35:50.282 "data_size": 63488 00:35:50.282 }, 00:35:50.282 { 00:35:50.282 "name": null, 00:35:50.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:50.282 "is_configured": false, 00:35:50.282 "data_offset": 2048, 00:35:50.282 "data_size": 63488 00:35:50.282 }, 00:35:50.282 { 00:35:50.282 "name": "BaseBdev3", 00:35:50.282 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:50.282 "is_configured": true, 00:35:50.282 "data_offset": 2048, 00:35:50.282 "data_size": 63488 00:35:50.282 }, 00:35:50.282 { 00:35:50.282 "name": "BaseBdev4", 00:35:50.282 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:50.282 "is_configured": true, 00:35:50.282 "data_offset": 2048, 00:35:50.282 "data_size": 63488 00:35:50.282 } 00:35:50.282 ] 00:35:50.282 }' 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.282 14:04:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:50.282 [2024-10-09 14:04:56.699668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:50.282 [2024-10-09 14:04:56.699726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:50.282 [2024-10-09 14:04:56.699753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:35:50.282 [2024-10-09 14:04:56.699766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:50.282 [2024-10-09 14:04:56.700247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:50.282 [2024-10-09 14:04:56.700275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:50.282 [2024-10-09 14:04:56.700358] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:50.282 [2024-10-09 14:04:56.700375] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:50.282 [2024-10-09 14:04:56.700393] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:50.282 [2024-10-09 14:04:56.700409] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:35:50.282 BaseBdev1 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.282 14:04:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.216 "name": "raid_bdev1", 00:35:51.216 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:51.216 "strip_size_kb": 0, 00:35:51.216 "state": "online", 00:35:51.216 "raid_level": "raid1", 00:35:51.216 "superblock": true, 00:35:51.216 "num_base_bdevs": 4, 00:35:51.216 "num_base_bdevs_discovered": 2, 00:35:51.216 "num_base_bdevs_operational": 2, 00:35:51.216 "base_bdevs_list": [ 00:35:51.216 { 00:35:51.216 "name": null, 00:35:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.216 "is_configured": false, 00:35:51.216 
"data_offset": 0, 00:35:51.216 "data_size": 63488 00:35:51.216 }, 00:35:51.216 { 00:35:51.216 "name": null, 00:35:51.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.216 "is_configured": false, 00:35:51.216 "data_offset": 2048, 00:35:51.216 "data_size": 63488 00:35:51.216 }, 00:35:51.216 { 00:35:51.216 "name": "BaseBdev3", 00:35:51.216 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:51.216 "is_configured": true, 00:35:51.216 "data_offset": 2048, 00:35:51.216 "data_size": 63488 00:35:51.216 }, 00:35:51.216 { 00:35:51.216 "name": "BaseBdev4", 00:35:51.216 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:51.216 "is_configured": true, 00:35:51.216 "data_offset": 2048, 00:35:51.216 "data_size": 63488 00:35:51.216 } 00:35:51.216 ] 00:35:51.216 }' 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.216 14:04:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.783 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:51.783 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:51.784 "name": "raid_bdev1", 00:35:51.784 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:51.784 "strip_size_kb": 0, 00:35:51.784 "state": "online", 00:35:51.784 "raid_level": "raid1", 00:35:51.784 "superblock": true, 00:35:51.784 "num_base_bdevs": 4, 00:35:51.784 "num_base_bdevs_discovered": 2, 00:35:51.784 "num_base_bdevs_operational": 2, 00:35:51.784 "base_bdevs_list": [ 00:35:51.784 { 00:35:51.784 "name": null, 00:35:51.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.784 "is_configured": false, 00:35:51.784 "data_offset": 0, 00:35:51.784 "data_size": 63488 00:35:51.784 }, 00:35:51.784 { 00:35:51.784 "name": null, 00:35:51.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.784 "is_configured": false, 00:35:51.784 "data_offset": 2048, 00:35:51.784 "data_size": 63488 00:35:51.784 }, 00:35:51.784 { 00:35:51.784 "name": "BaseBdev3", 00:35:51.784 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:51.784 "is_configured": true, 00:35:51.784 "data_offset": 2048, 00:35:51.784 "data_size": 63488 00:35:51.784 }, 00:35:51.784 { 00:35:51.784 "name": "BaseBdev4", 00:35:51.784 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:51.784 "is_configured": true, 00:35:51.784 "data_offset": 2048, 00:35:51.784 "data_size": 63488 00:35:51.784 } 00:35:51.784 ] 00:35:51.784 }' 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:51.784 [2024-10-09 14:04:58.308291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:51.784 [2024-10-09 14:04:58.308484] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:35:51.784 [2024-10-09 14:04:58.308505] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:51.784 request: 00:35:51.784 { 00:35:51.784 "base_bdev": "BaseBdev1", 00:35:51.784 "raid_bdev": "raid_bdev1", 00:35:51.784 "method": "bdev_raid_add_base_bdev", 00:35:51.784 "req_id": 1 00:35:51.784 } 00:35:51.784 Got JSON-RPC error response 00:35:51.784 response: 00:35:51.784 { 00:35:51.784 "code": -22, 
00:35:51.784 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:51.784 } 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:51.784 14:04:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.161 14:04:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:53.161 "name": "raid_bdev1", 00:35:53.161 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:53.161 "strip_size_kb": 0, 00:35:53.161 "state": "online", 00:35:53.161 "raid_level": "raid1", 00:35:53.161 "superblock": true, 00:35:53.161 "num_base_bdevs": 4, 00:35:53.161 "num_base_bdevs_discovered": 2, 00:35:53.161 "num_base_bdevs_operational": 2, 00:35:53.161 "base_bdevs_list": [ 00:35:53.161 { 00:35:53.161 "name": null, 00:35:53.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.161 "is_configured": false, 00:35:53.161 "data_offset": 0, 00:35:53.161 "data_size": 63488 00:35:53.161 }, 00:35:53.161 { 00:35:53.161 "name": null, 00:35:53.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.161 "is_configured": false, 00:35:53.161 "data_offset": 2048, 00:35:53.161 "data_size": 63488 00:35:53.161 }, 00:35:53.161 { 00:35:53.161 "name": "BaseBdev3", 00:35:53.161 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:53.161 "is_configured": true, 00:35:53.161 "data_offset": 2048, 00:35:53.161 "data_size": 63488 00:35:53.161 }, 00:35:53.161 { 00:35:53.161 "name": "BaseBdev4", 00:35:53.161 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:53.161 "is_configured": true, 00:35:53.161 "data_offset": 2048, 00:35:53.161 "data_size": 63488 00:35:53.161 } 00:35:53.161 ] 00:35:53.161 }' 00:35:53.161 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:53.161 14:04:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:53.420 "name": "raid_bdev1", 00:35:53.420 "uuid": "38ac4560-cb16-41e5-a41f-408d3568a699", 00:35:53.420 "strip_size_kb": 0, 00:35:53.420 "state": "online", 00:35:53.420 "raid_level": "raid1", 00:35:53.420 "superblock": true, 00:35:53.420 "num_base_bdevs": 4, 00:35:53.420 "num_base_bdevs_discovered": 2, 00:35:53.420 "num_base_bdevs_operational": 2, 00:35:53.420 "base_bdevs_list": [ 00:35:53.420 { 00:35:53.420 "name": null, 00:35:53.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.420 "is_configured": false, 00:35:53.420 "data_offset": 0, 00:35:53.420 "data_size": 63488 00:35:53.420 }, 00:35:53.420 { 00:35:53.420 "name": null, 00:35:53.420 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:53.420 "is_configured": false, 00:35:53.420 "data_offset": 2048, 00:35:53.420 "data_size": 63488 00:35:53.420 }, 00:35:53.420 { 00:35:53.420 "name": "BaseBdev3", 00:35:53.420 "uuid": "d36defc8-40b0-502e-a125-c2e7a843b32a", 00:35:53.420 "is_configured": true, 00:35:53.420 "data_offset": 2048, 00:35:53.420 "data_size": 63488 00:35:53.420 }, 00:35:53.420 { 00:35:53.420 "name": "BaseBdev4", 00:35:53.420 "uuid": "415656d7-b133-5225-8055-326cb070ca5c", 00:35:53.420 "is_configured": true, 00:35:53.420 "data_offset": 2048, 00:35:53.420 "data_size": 63488 00:35:53.420 } 00:35:53.420 ] 00:35:53.420 }' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 90157 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 90157 ']' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 90157 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90157 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:35:53.420 killing process with pid 90157 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90157' 00:35:53.420 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 90157 00:35:53.420 Received shutdown signal, test time was about 18.104738 seconds 00:35:53.420 00:35:53.420 Latency(us) 00:35:53.420 [2024-10-09T14:04:59.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.420 [2024-10-09T14:04:59.972Z] =================================================================================================================== 00:35:53.421 [2024-10-09T14:04:59.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.421 [2024-10-09 14:04:59.954353] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:53.421 14:04:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 90157 00:35:53.421 [2024-10-09 14:04:59.954526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:53.421 [2024-10-09 14:04:59.954621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:53.421 [2024-10-09 14:04:59.954639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:35:53.680 [2024-10-09 14:05:00.002800] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:53.937 14:05:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:35:53.937 00:35:53.937 real 0m20.277s 00:35:53.937 user 0m27.162s 00:35:53.937 sys 0m2.868s 00:35:53.937 14:05:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:53.937 14:05:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:35:53.937 ************************************ 00:35:53.937 END TEST raid_rebuild_test_sb_io 00:35:53.937 
************************************ 00:35:53.937 14:05:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:35:53.937 14:05:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:35:53.937 14:05:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:35:53.937 14:05:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:53.937 14:05:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:53.937 ************************************ 00:35:53.937 START TEST raid5f_state_function_test 00:35:53.937 ************************************ 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:53.937 14:05:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90872 00:35:53.937 Process raid pid: 90872 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90872' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90872 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90872 ']' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.937 14:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:53.937 [2024-10-09 14:05:00.460842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:35:53.937 [2024-10-09 14:05:00.461026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.195 [2024-10-09 14:05:00.641999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.195 [2024-10-09 14:05:00.686606] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.195 [2024-10-09 14:05:00.731767] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:54.195 [2024-10-09 14:05:00.731805] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.130 [2024-10-09 14:05:01.383622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:55.130 [2024-10-09 14:05:01.383682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:55.130 [2024-10-09 14:05:01.383701] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:55.130 [2024-10-09 14:05:01.383717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:55.130 [2024-10-09 14:05:01.383726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:35:55.130 [2024-10-09 14:05:01.383757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:55.130 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:55.131 "name": "Existed_Raid", 00:35:55.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.131 "strip_size_kb": 64, 00:35:55.131 "state": "configuring", 00:35:55.131 "raid_level": "raid5f", 00:35:55.131 "superblock": false, 00:35:55.131 "num_base_bdevs": 3, 00:35:55.131 "num_base_bdevs_discovered": 0, 00:35:55.131 "num_base_bdevs_operational": 3, 00:35:55.131 "base_bdevs_list": [ 00:35:55.131 { 00:35:55.131 "name": "BaseBdev1", 00:35:55.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.131 "is_configured": false, 00:35:55.131 "data_offset": 0, 00:35:55.131 "data_size": 0 00:35:55.131 }, 00:35:55.131 { 00:35:55.131 "name": "BaseBdev2", 00:35:55.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.131 "is_configured": false, 00:35:55.131 "data_offset": 0, 00:35:55.131 "data_size": 0 00:35:55.131 }, 00:35:55.131 { 00:35:55.131 "name": "BaseBdev3", 00:35:55.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.131 "is_configured": false, 00:35:55.131 "data_offset": 0, 00:35:55.131 "data_size": 0 00:35:55.131 } 00:35:55.131 ] 00:35:55.131 }' 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:55.131 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 [2024-10-09 14:05:01.851622] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:55.390 [2024-10-09 14:05:01.851668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 [2024-10-09 14:05:01.859662] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:55.390 [2024-10-09 14:05:01.859705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:55.390 [2024-10-09 14:05:01.859717] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:55.390 [2024-10-09 14:05:01.859732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:55.390 [2024-10-09 14:05:01.859741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:55.390 [2024-10-09 14:05:01.859755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 [2024-10-09 14:05:01.877385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:55.390 BaseBdev1 00:35:55.390 14:05:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 [ 00:35:55.390 { 00:35:55.390 "name": "BaseBdev1", 00:35:55.390 "aliases": [ 00:35:55.390 "5c0088d1-3d7d-41a6-8492-9632c2aa339b" 00:35:55.390 ], 00:35:55.390 "product_name": "Malloc disk", 00:35:55.390 "block_size": 512, 00:35:55.390 "num_blocks": 65536, 00:35:55.390 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:55.390 "assigned_rate_limits": { 00:35:55.390 "rw_ios_per_sec": 0, 00:35:55.390 
"rw_mbytes_per_sec": 0, 00:35:55.390 "r_mbytes_per_sec": 0, 00:35:55.390 "w_mbytes_per_sec": 0 00:35:55.390 }, 00:35:55.390 "claimed": true, 00:35:55.390 "claim_type": "exclusive_write", 00:35:55.390 "zoned": false, 00:35:55.390 "supported_io_types": { 00:35:55.390 "read": true, 00:35:55.390 "write": true, 00:35:55.390 "unmap": true, 00:35:55.390 "flush": true, 00:35:55.390 "reset": true, 00:35:55.390 "nvme_admin": false, 00:35:55.390 "nvme_io": false, 00:35:55.390 "nvme_io_md": false, 00:35:55.390 "write_zeroes": true, 00:35:55.390 "zcopy": true, 00:35:55.390 "get_zone_info": false, 00:35:55.390 "zone_management": false, 00:35:55.390 "zone_append": false, 00:35:55.390 "compare": false, 00:35:55.390 "compare_and_write": false, 00:35:55.390 "abort": true, 00:35:55.390 "seek_hole": false, 00:35:55.390 "seek_data": false, 00:35:55.390 "copy": true, 00:35:55.390 "nvme_iov_md": false 00:35:55.390 }, 00:35:55.390 "memory_domains": [ 00:35:55.390 { 00:35:55.390 "dma_device_id": "system", 00:35:55.390 "dma_device_type": 1 00:35:55.390 }, 00:35:55.390 { 00:35:55.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.390 "dma_device_type": 2 00:35:55.390 } 00:35:55.390 ], 00:35:55.390 "driver_specific": {} 00:35:55.390 } 00:35:55.390 ] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:55.390 14:05:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.390 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.649 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:55.649 "name": "Existed_Raid", 00:35:55.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.649 "strip_size_kb": 64, 00:35:55.649 "state": "configuring", 00:35:55.649 "raid_level": "raid5f", 00:35:55.649 "superblock": false, 00:35:55.649 "num_base_bdevs": 3, 00:35:55.649 "num_base_bdevs_discovered": 1, 00:35:55.649 "num_base_bdevs_operational": 3, 00:35:55.649 "base_bdevs_list": [ 00:35:55.649 { 00:35:55.649 "name": "BaseBdev1", 00:35:55.649 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:55.649 "is_configured": true, 00:35:55.649 "data_offset": 0, 00:35:55.649 "data_size": 65536 00:35:55.649 }, 00:35:55.649 { 00:35:55.649 "name": 
"BaseBdev2", 00:35:55.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.649 "is_configured": false, 00:35:55.649 "data_offset": 0, 00:35:55.649 "data_size": 0 00:35:55.649 }, 00:35:55.649 { 00:35:55.649 "name": "BaseBdev3", 00:35:55.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.649 "is_configured": false, 00:35:55.649 "data_offset": 0, 00:35:55.649 "data_size": 0 00:35:55.649 } 00:35:55.649 ] 00:35:55.649 }' 00:35:55.649 14:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:55.649 14:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.908 [2024-10-09 14:05:02.361567] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:55.908 [2024-10-09 14:05:02.361624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.908 [2024-10-09 14:05:02.369593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:55.908 [2024-10-09 14:05:02.371813] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:35:55.908 [2024-10-09 14:05:02.371858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:55.908 [2024-10-09 14:05:02.371870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:55.908 [2024-10-09 14:05:02.371884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:55.908 "name": "Existed_Raid", 00:35:55.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.908 "strip_size_kb": 64, 00:35:55.908 "state": "configuring", 00:35:55.908 "raid_level": "raid5f", 00:35:55.908 "superblock": false, 00:35:55.908 "num_base_bdevs": 3, 00:35:55.908 "num_base_bdevs_discovered": 1, 00:35:55.908 "num_base_bdevs_operational": 3, 00:35:55.908 "base_bdevs_list": [ 00:35:55.908 { 00:35:55.908 "name": "BaseBdev1", 00:35:55.908 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:55.908 "is_configured": true, 00:35:55.908 "data_offset": 0, 00:35:55.908 "data_size": 65536 00:35:55.908 }, 00:35:55.908 { 00:35:55.908 "name": "BaseBdev2", 00:35:55.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.908 "is_configured": false, 00:35:55.908 "data_offset": 0, 00:35:55.908 "data_size": 0 00:35:55.908 }, 00:35:55.908 { 00:35:55.908 "name": "BaseBdev3", 00:35:55.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:55.908 "is_configured": false, 00:35:55.908 "data_offset": 0, 00:35:55.908 "data_size": 0 00:35:55.908 } 00:35:55.908 ] 00:35:55.908 }' 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:55.908 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.476 [2024-10-09 14:05:02.854255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:56.476 BaseBdev2 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.476 [ 00:35:56.476 { 00:35:56.476 "name": "BaseBdev2", 00:35:56.476 "aliases": [ 00:35:56.476 "54ad98c1-535b-48f7-b48b-fbb7b17d5793" 00:35:56.476 ], 00:35:56.476 "product_name": "Malloc disk", 00:35:56.476 "block_size": 512, 00:35:56.476 "num_blocks": 65536, 00:35:56.476 "uuid": "54ad98c1-535b-48f7-b48b-fbb7b17d5793", 00:35:56.476 "assigned_rate_limits": { 00:35:56.476 "rw_ios_per_sec": 0, 00:35:56.476 "rw_mbytes_per_sec": 0, 00:35:56.476 "r_mbytes_per_sec": 0, 00:35:56.476 "w_mbytes_per_sec": 0 00:35:56.476 }, 00:35:56.476 "claimed": true, 00:35:56.476 "claim_type": "exclusive_write", 00:35:56.476 "zoned": false, 00:35:56.476 "supported_io_types": { 00:35:56.476 "read": true, 00:35:56.476 "write": true, 00:35:56.476 "unmap": true, 00:35:56.476 "flush": true, 00:35:56.476 "reset": true, 00:35:56.476 "nvme_admin": false, 00:35:56.476 "nvme_io": false, 00:35:56.476 "nvme_io_md": false, 00:35:56.476 "write_zeroes": true, 00:35:56.476 "zcopy": true, 00:35:56.476 "get_zone_info": false, 00:35:56.476 "zone_management": false, 00:35:56.476 "zone_append": false, 00:35:56.476 "compare": false, 00:35:56.476 "compare_and_write": false, 00:35:56.476 "abort": true, 00:35:56.476 "seek_hole": false, 00:35:56.476 "seek_data": false, 00:35:56.476 "copy": true, 00:35:56.476 "nvme_iov_md": false 00:35:56.476 }, 00:35:56.476 "memory_domains": [ 00:35:56.476 { 00:35:56.476 "dma_device_id": "system", 00:35:56.476 "dma_device_type": 1 00:35:56.476 }, 00:35:56.476 { 00:35:56.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:56.476 "dma_device_type": 2 00:35:56.476 } 00:35:56.476 ], 00:35:56.476 "driver_specific": {} 00:35:56.476 } 00:35:56.476 ] 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:56.476 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:35:56.477 "name": "Existed_Raid", 00:35:56.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:56.477 "strip_size_kb": 64, 00:35:56.477 "state": "configuring", 00:35:56.477 "raid_level": "raid5f", 00:35:56.477 "superblock": false, 00:35:56.477 "num_base_bdevs": 3, 00:35:56.477 "num_base_bdevs_discovered": 2, 00:35:56.477 "num_base_bdevs_operational": 3, 00:35:56.477 "base_bdevs_list": [ 00:35:56.477 { 00:35:56.477 "name": "BaseBdev1", 00:35:56.477 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:56.477 "is_configured": true, 00:35:56.477 "data_offset": 0, 00:35:56.477 "data_size": 65536 00:35:56.477 }, 00:35:56.477 { 00:35:56.477 "name": "BaseBdev2", 00:35:56.477 "uuid": "54ad98c1-535b-48f7-b48b-fbb7b17d5793", 00:35:56.477 "is_configured": true, 00:35:56.477 "data_offset": 0, 00:35:56.477 "data_size": 65536 00:35:56.477 }, 00:35:56.477 { 00:35:56.477 "name": "BaseBdev3", 00:35:56.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:56.477 "is_configured": false, 00:35:56.477 "data_offset": 0, 00:35:56.477 "data_size": 0 00:35:56.477 } 00:35:56.477 ] 00:35:56.477 }' 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:56.477 14:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.043 [2024-10-09 14:05:03.345907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:57.043 [2024-10-09 14:05:03.345974] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:35:57.043 [2024-10-09 14:05:03.345990] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:35:57.043 [2024-10-09 14:05:03.346324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:35:57.043 [2024-10-09 14:05:03.346841] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:35:57.043 [2024-10-09 14:05:03.346864] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:35:57.043 [2024-10-09 14:05:03.347098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:57.043 BaseBdev3 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:57.043 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.044 [ 00:35:57.044 { 00:35:57.044 "name": "BaseBdev3", 00:35:57.044 "aliases": [ 00:35:57.044 "67633186-c6e3-498e-8466-31e658414717" 00:35:57.044 ], 00:35:57.044 "product_name": "Malloc disk", 00:35:57.044 "block_size": 512, 00:35:57.044 "num_blocks": 65536, 00:35:57.044 "uuid": "67633186-c6e3-498e-8466-31e658414717", 00:35:57.044 "assigned_rate_limits": { 00:35:57.044 "rw_ios_per_sec": 0, 00:35:57.044 "rw_mbytes_per_sec": 0, 00:35:57.044 "r_mbytes_per_sec": 0, 00:35:57.044 "w_mbytes_per_sec": 0 00:35:57.044 }, 00:35:57.044 "claimed": true, 00:35:57.044 "claim_type": "exclusive_write", 00:35:57.044 "zoned": false, 00:35:57.044 "supported_io_types": { 00:35:57.044 "read": true, 00:35:57.044 "write": true, 00:35:57.044 "unmap": true, 00:35:57.044 "flush": true, 00:35:57.044 "reset": true, 00:35:57.044 "nvme_admin": false, 00:35:57.044 "nvme_io": false, 00:35:57.044 "nvme_io_md": false, 00:35:57.044 "write_zeroes": true, 00:35:57.044 "zcopy": true, 00:35:57.044 "get_zone_info": false, 00:35:57.044 "zone_management": false, 00:35:57.044 "zone_append": false, 00:35:57.044 "compare": false, 00:35:57.044 "compare_and_write": false, 00:35:57.044 "abort": true, 00:35:57.044 "seek_hole": false, 00:35:57.044 "seek_data": false, 00:35:57.044 "copy": true, 00:35:57.044 "nvme_iov_md": false 00:35:57.044 }, 00:35:57.044 "memory_domains": [ 00:35:57.044 { 00:35:57.044 "dma_device_id": "system", 00:35:57.044 "dma_device_type": 1 00:35:57.044 }, 00:35:57.044 { 00:35:57.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:57.044 "dma_device_type": 2 00:35:57.044 } 00:35:57.044 ], 00:35:57.044 "driver_specific": {} 00:35:57.044 } 00:35:57.044 ] 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.044 14:05:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:57.044 "name": "Existed_Raid", 00:35:57.044 "uuid": "768b9cf7-6ff6-434e-afae-44428a8a51bf", 00:35:57.044 "strip_size_kb": 64, 00:35:57.044 "state": "online", 00:35:57.044 "raid_level": "raid5f", 00:35:57.044 "superblock": false, 00:35:57.044 "num_base_bdevs": 3, 00:35:57.044 "num_base_bdevs_discovered": 3, 00:35:57.044 "num_base_bdevs_operational": 3, 00:35:57.044 "base_bdevs_list": [ 00:35:57.044 { 00:35:57.044 "name": "BaseBdev1", 00:35:57.044 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:57.044 "is_configured": true, 00:35:57.044 "data_offset": 0, 00:35:57.044 "data_size": 65536 00:35:57.044 }, 00:35:57.044 { 00:35:57.044 "name": "BaseBdev2", 00:35:57.044 "uuid": "54ad98c1-535b-48f7-b48b-fbb7b17d5793", 00:35:57.044 "is_configured": true, 00:35:57.044 "data_offset": 0, 00:35:57.044 "data_size": 65536 00:35:57.044 }, 00:35:57.044 { 00:35:57.044 "name": "BaseBdev3", 00:35:57.044 "uuid": "67633186-c6e3-498e-8466-31e658414717", 00:35:57.044 "is_configured": true, 00:35:57.044 "data_offset": 0, 00:35:57.044 "data_size": 65536 00:35:57.044 } 00:35:57.044 ] 00:35:57.044 }' 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:57.044 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:57.612 14:05:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:57.612 [2024-10-09 14:05:03.870324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:57.612 "name": "Existed_Raid", 00:35:57.612 "aliases": [ 00:35:57.612 "768b9cf7-6ff6-434e-afae-44428a8a51bf" 00:35:57.612 ], 00:35:57.612 "product_name": "Raid Volume", 00:35:57.612 "block_size": 512, 00:35:57.612 "num_blocks": 131072, 00:35:57.612 "uuid": "768b9cf7-6ff6-434e-afae-44428a8a51bf", 00:35:57.612 "assigned_rate_limits": { 00:35:57.612 "rw_ios_per_sec": 0, 00:35:57.612 "rw_mbytes_per_sec": 0, 00:35:57.612 "r_mbytes_per_sec": 0, 00:35:57.612 "w_mbytes_per_sec": 0 00:35:57.612 }, 00:35:57.612 "claimed": false, 00:35:57.612 "zoned": false, 00:35:57.612 "supported_io_types": { 00:35:57.612 "read": true, 00:35:57.612 "write": true, 00:35:57.612 "unmap": false, 00:35:57.612 "flush": false, 00:35:57.612 "reset": true, 00:35:57.612 "nvme_admin": false, 00:35:57.612 "nvme_io": false, 00:35:57.612 "nvme_io_md": false, 00:35:57.612 "write_zeroes": true, 00:35:57.612 "zcopy": false, 00:35:57.612 "get_zone_info": false, 00:35:57.612 "zone_management": false, 00:35:57.612 "zone_append": false, 
00:35:57.612 "compare": false, 00:35:57.612 "compare_and_write": false, 00:35:57.612 "abort": false, 00:35:57.612 "seek_hole": false, 00:35:57.612 "seek_data": false, 00:35:57.612 "copy": false, 00:35:57.612 "nvme_iov_md": false 00:35:57.612 }, 00:35:57.612 "driver_specific": { 00:35:57.612 "raid": { 00:35:57.612 "uuid": "768b9cf7-6ff6-434e-afae-44428a8a51bf", 00:35:57.612 "strip_size_kb": 64, 00:35:57.612 "state": "online", 00:35:57.612 "raid_level": "raid5f", 00:35:57.612 "superblock": false, 00:35:57.612 "num_base_bdevs": 3, 00:35:57.612 "num_base_bdevs_discovered": 3, 00:35:57.612 "num_base_bdevs_operational": 3, 00:35:57.612 "base_bdevs_list": [ 00:35:57.612 { 00:35:57.612 "name": "BaseBdev1", 00:35:57.612 "uuid": "5c0088d1-3d7d-41a6-8492-9632c2aa339b", 00:35:57.612 "is_configured": true, 00:35:57.612 "data_offset": 0, 00:35:57.612 "data_size": 65536 00:35:57.612 }, 00:35:57.612 { 00:35:57.612 "name": "BaseBdev2", 00:35:57.612 "uuid": "54ad98c1-535b-48f7-b48b-fbb7b17d5793", 00:35:57.612 "is_configured": true, 00:35:57.612 "data_offset": 0, 00:35:57.612 "data_size": 65536 00:35:57.612 }, 00:35:57.612 { 00:35:57.612 "name": "BaseBdev3", 00:35:57.612 "uuid": "67633186-c6e3-498e-8466-31e658414717", 00:35:57.612 "is_configured": true, 00:35:57.612 "data_offset": 0, 00:35:57.612 "data_size": 65536 00:35:57.612 } 00:35:57.612 ] 00:35:57.612 } 00:35:57.612 } 00:35:57.612 }' 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:57.612 BaseBdev2 00:35:57.612 BaseBdev3' 00:35:57.612 14:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.612 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.613 [2024-10-09 14:05:04.146247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:57.613 
14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:57.613 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:57.933 "name": "Existed_Raid", 00:35:57.933 "uuid": "768b9cf7-6ff6-434e-afae-44428a8a51bf", 00:35:57.933 "strip_size_kb": 64, 00:35:57.933 "state": 
"online", 00:35:57.933 "raid_level": "raid5f", 00:35:57.933 "superblock": false, 00:35:57.933 "num_base_bdevs": 3, 00:35:57.933 "num_base_bdevs_discovered": 2, 00:35:57.933 "num_base_bdevs_operational": 2, 00:35:57.933 "base_bdevs_list": [ 00:35:57.933 { 00:35:57.933 "name": null, 00:35:57.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.933 "is_configured": false, 00:35:57.933 "data_offset": 0, 00:35:57.933 "data_size": 65536 00:35:57.933 }, 00:35:57.933 { 00:35:57.933 "name": "BaseBdev2", 00:35:57.933 "uuid": "54ad98c1-535b-48f7-b48b-fbb7b17d5793", 00:35:57.933 "is_configured": true, 00:35:57.933 "data_offset": 0, 00:35:57.933 "data_size": 65536 00:35:57.933 }, 00:35:57.933 { 00:35:57.933 "name": "BaseBdev3", 00:35:57.933 "uuid": "67633186-c6e3-498e-8466-31e658414717", 00:35:57.933 "is_configured": true, 00:35:57.933 "data_offset": 0, 00:35:57.933 "data_size": 65536 00:35:57.933 } 00:35:57.933 ] 00:35:57.933 }' 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:57.933 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.192 [2024-10-09 14:05:04.679002] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:58.192 [2024-10-09 14:05:04.679116] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:58.192 [2024-10-09 14:05:04.691523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:35:58.192 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.451 [2024-10-09 14:05:04.747610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:58.451 [2024-10-09 14:05:04.747665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.451 BaseBdev2 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:58.451 [ 00:35:58.451 { 00:35:58.451 "name": "BaseBdev2", 00:35:58.451 "aliases": [ 00:35:58.451 "317b9d0c-e55a-4836-8510-7b6ff6db5fe8" 00:35:58.451 ], 00:35:58.451 "product_name": "Malloc disk", 00:35:58.451 "block_size": 512, 00:35:58.451 "num_blocks": 65536, 00:35:58.451 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:35:58.451 "assigned_rate_limits": { 00:35:58.451 "rw_ios_per_sec": 0, 00:35:58.451 "rw_mbytes_per_sec": 0, 00:35:58.451 "r_mbytes_per_sec": 0, 00:35:58.451 "w_mbytes_per_sec": 0 00:35:58.451 }, 00:35:58.451 "claimed": false, 00:35:58.451 "zoned": false, 00:35:58.451 "supported_io_types": { 00:35:58.451 "read": true, 00:35:58.451 "write": true, 00:35:58.451 "unmap": true, 00:35:58.451 "flush": true, 00:35:58.451 "reset": true, 00:35:58.451 "nvme_admin": false, 00:35:58.451 "nvme_io": false, 00:35:58.451 "nvme_io_md": false, 00:35:58.451 "write_zeroes": true, 00:35:58.451 "zcopy": true, 00:35:58.451 "get_zone_info": false, 00:35:58.451 "zone_management": false, 00:35:58.451 "zone_append": false, 00:35:58.451 "compare": false, 00:35:58.451 "compare_and_write": false, 00:35:58.451 "abort": true, 00:35:58.451 "seek_hole": false, 00:35:58.451 "seek_data": false, 00:35:58.451 "copy": true, 00:35:58.451 "nvme_iov_md": false 00:35:58.451 }, 00:35:58.451 "memory_domains": [ 00:35:58.451 { 00:35:58.451 "dma_device_id": "system", 00:35:58.451 "dma_device_type": 1 00:35:58.451 }, 00:35:58.451 { 00:35:58.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:58.451 "dma_device_type": 2 00:35:58.451 } 00:35:58.451 ], 00:35:58.451 "driver_specific": {} 00:35:58.451 } 00:35:58.451 ] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.451 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 BaseBdev3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:58.452 [ 00:35:58.452 { 00:35:58.452 "name": "BaseBdev3", 00:35:58.452 "aliases": [ 00:35:58.452 "4f67e581-0172-4ba5-be90-9265ceeb0955" 00:35:58.452 ], 00:35:58.452 "product_name": "Malloc disk", 00:35:58.452 "block_size": 512, 00:35:58.452 "num_blocks": 65536, 00:35:58.452 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:35:58.452 "assigned_rate_limits": { 00:35:58.452 "rw_ios_per_sec": 0, 00:35:58.452 "rw_mbytes_per_sec": 0, 00:35:58.452 "r_mbytes_per_sec": 0, 00:35:58.452 "w_mbytes_per_sec": 0 00:35:58.452 }, 00:35:58.452 "claimed": false, 00:35:58.452 "zoned": false, 00:35:58.452 "supported_io_types": { 00:35:58.452 "read": true, 00:35:58.452 "write": true, 00:35:58.452 "unmap": true, 00:35:58.452 "flush": true, 00:35:58.452 "reset": true, 00:35:58.452 "nvme_admin": false, 00:35:58.452 "nvme_io": false, 00:35:58.452 "nvme_io_md": false, 00:35:58.452 "write_zeroes": true, 00:35:58.452 "zcopy": true, 00:35:58.452 "get_zone_info": false, 00:35:58.452 "zone_management": false, 00:35:58.452 "zone_append": false, 00:35:58.452 "compare": false, 00:35:58.452 "compare_and_write": false, 00:35:58.452 "abort": true, 00:35:58.452 "seek_hole": false, 00:35:58.452 "seek_data": false, 00:35:58.452 "copy": true, 00:35:58.452 "nvme_iov_md": false 00:35:58.452 }, 00:35:58.452 "memory_domains": [ 00:35:58.452 { 00:35:58.452 "dma_device_id": "system", 00:35:58.452 "dma_device_type": 1 00:35:58.452 }, 00:35:58.452 { 00:35:58.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:58.452 "dma_device_type": 2 00:35:58.452 } 00:35:58.452 ], 00:35:58.452 "driver_specific": {} 00:35:58.452 } 00:35:58.452 ] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:58.452 14:05:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 [2024-10-09 14:05:04.919111] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:58.452 [2024-10-09 14:05:04.919159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:58.452 [2024-10-09 14:05:04.919184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:58.452 [2024-10-09 14:05:04.921464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:58.452 14:05:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:58.452 "name": "Existed_Raid", 00:35:58.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.452 "strip_size_kb": 64, 00:35:58.452 "state": "configuring", 00:35:58.452 "raid_level": "raid5f", 00:35:58.452 "superblock": false, 00:35:58.452 "num_base_bdevs": 3, 00:35:58.452 "num_base_bdevs_discovered": 2, 00:35:58.452 "num_base_bdevs_operational": 3, 00:35:58.452 "base_bdevs_list": [ 00:35:58.452 { 00:35:58.452 "name": "BaseBdev1", 00:35:58.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:58.452 "is_configured": false, 00:35:58.452 "data_offset": 0, 00:35:58.452 "data_size": 0 00:35:58.452 }, 00:35:58.452 { 00:35:58.452 "name": "BaseBdev2", 00:35:58.452 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:35:58.452 "is_configured": true, 00:35:58.452 "data_offset": 0, 00:35:58.452 "data_size": 65536 00:35:58.452 }, 00:35:58.452 { 00:35:58.452 "name": "BaseBdev3", 00:35:58.452 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:35:58.452 "is_configured": true, 
00:35:58.452 "data_offset": 0, 00:35:58.452 "data_size": 65536 00:35:58.452 } 00:35:58.452 ] 00:35:58.452 }' 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:58.452 14:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.020 [2024-10-09 14:05:05.371171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:59.020 14:05:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:59.020 "name": "Existed_Raid", 00:35:59.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.020 "strip_size_kb": 64, 00:35:59.020 "state": "configuring", 00:35:59.020 "raid_level": "raid5f", 00:35:59.020 "superblock": false, 00:35:59.020 "num_base_bdevs": 3, 00:35:59.020 "num_base_bdevs_discovered": 1, 00:35:59.020 "num_base_bdevs_operational": 3, 00:35:59.020 "base_bdevs_list": [ 00:35:59.020 { 00:35:59.020 "name": "BaseBdev1", 00:35:59.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.020 "is_configured": false, 00:35:59.020 "data_offset": 0, 00:35:59.020 "data_size": 0 00:35:59.020 }, 00:35:59.020 { 00:35:59.020 "name": null, 00:35:59.020 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:35:59.020 "is_configured": false, 00:35:59.020 "data_offset": 0, 00:35:59.020 "data_size": 65536 00:35:59.020 }, 00:35:59.020 { 00:35:59.020 "name": "BaseBdev3", 00:35:59.020 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:35:59.020 "is_configured": true, 00:35:59.020 "data_offset": 0, 00:35:59.020 "data_size": 65536 00:35:59.020 } 00:35:59.020 ] 00:35:59.020 }' 00:35:59.020 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:59.020 14:05:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 [2024-10-09 14:05:05.890491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:59.588 BaseBdev1 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:59.588 14:05:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 [ 00:35:59.588 { 00:35:59.588 "name": "BaseBdev1", 00:35:59.588 "aliases": [ 00:35:59.588 "502e041d-4708-4ae6-bc38-56eb051a12b0" 00:35:59.588 ], 00:35:59.588 "product_name": "Malloc disk", 00:35:59.588 "block_size": 512, 00:35:59.588 "num_blocks": 65536, 00:35:59.588 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:35:59.588 "assigned_rate_limits": { 00:35:59.588 "rw_ios_per_sec": 0, 00:35:59.588 "rw_mbytes_per_sec": 0, 00:35:59.588 "r_mbytes_per_sec": 0, 00:35:59.588 "w_mbytes_per_sec": 0 00:35:59.588 }, 00:35:59.588 "claimed": true, 00:35:59.588 "claim_type": "exclusive_write", 00:35:59.588 "zoned": false, 00:35:59.588 "supported_io_types": { 00:35:59.588 "read": true, 00:35:59.588 "write": true, 00:35:59.588 "unmap": true, 00:35:59.588 "flush": true, 00:35:59.588 "reset": true, 00:35:59.588 "nvme_admin": false, 00:35:59.588 "nvme_io": false, 00:35:59.588 "nvme_io_md": false, 00:35:59.588 "write_zeroes": true, 00:35:59.588 "zcopy": true, 00:35:59.588 "get_zone_info": false, 00:35:59.588 "zone_management": false, 00:35:59.588 "zone_append": false, 00:35:59.588 
"compare": false, 00:35:59.588 "compare_and_write": false, 00:35:59.588 "abort": true, 00:35:59.588 "seek_hole": false, 00:35:59.588 "seek_data": false, 00:35:59.588 "copy": true, 00:35:59.588 "nvme_iov_md": false 00:35:59.588 }, 00:35:59.588 "memory_domains": [ 00:35:59.588 { 00:35:59.588 "dma_device_id": "system", 00:35:59.588 "dma_device_type": 1 00:35:59.588 }, 00:35:59.588 { 00:35:59.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:59.588 "dma_device_type": 2 00:35:59.588 } 00:35:59.588 ], 00:35:59.588 "driver_specific": {} 00:35:59.588 } 00:35:59.588 ] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:59.588 14:05:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:59.588 "name": "Existed_Raid", 00:35:59.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.588 "strip_size_kb": 64, 00:35:59.588 "state": "configuring", 00:35:59.588 "raid_level": "raid5f", 00:35:59.588 "superblock": false, 00:35:59.588 "num_base_bdevs": 3, 00:35:59.588 "num_base_bdevs_discovered": 2, 00:35:59.588 "num_base_bdevs_operational": 3, 00:35:59.588 "base_bdevs_list": [ 00:35:59.588 { 00:35:59.588 "name": "BaseBdev1", 00:35:59.588 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:35:59.588 "is_configured": true, 00:35:59.588 "data_offset": 0, 00:35:59.588 "data_size": 65536 00:35:59.588 }, 00:35:59.588 { 00:35:59.588 "name": null, 00:35:59.588 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:35:59.588 "is_configured": false, 00:35:59.588 "data_offset": 0, 00:35:59.588 "data_size": 65536 00:35:59.588 }, 00:35:59.588 { 00:35:59.588 "name": "BaseBdev3", 00:35:59.588 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:35:59.588 "is_configured": true, 00:35:59.588 "data_offset": 0, 00:35:59.588 "data_size": 65536 00:35:59.588 } 00:35:59.588 ] 00:35:59.588 }' 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:59.588 14:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.846 14:05:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:59.847 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:59.847 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:59.847 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:59.847 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.105 [2024-10-09 14:05:06.406683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:00.105 14:05:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:00.105 "name": "Existed_Raid", 00:36:00.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.105 "strip_size_kb": 64, 00:36:00.105 "state": "configuring", 00:36:00.105 "raid_level": "raid5f", 00:36:00.105 "superblock": false, 00:36:00.105 "num_base_bdevs": 3, 00:36:00.105 "num_base_bdevs_discovered": 1, 00:36:00.105 "num_base_bdevs_operational": 3, 00:36:00.105 "base_bdevs_list": [ 00:36:00.105 { 00:36:00.105 "name": "BaseBdev1", 00:36:00.105 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:00.105 "is_configured": true, 00:36:00.105 "data_offset": 0, 00:36:00.105 "data_size": 65536 00:36:00.105 }, 00:36:00.105 { 00:36:00.105 "name": null, 00:36:00.105 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:00.105 "is_configured": false, 00:36:00.105 "data_offset": 0, 00:36:00.105 "data_size": 65536 00:36:00.105 }, 00:36:00.105 { 00:36:00.105 "name": null, 
00:36:00.105 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:00.105 "is_configured": false, 00:36:00.105 "data_offset": 0, 00:36:00.105 "data_size": 65536 00:36:00.105 } 00:36:00.105 ] 00:36:00.105 }' 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:00.105 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.365 [2024-10-09 14:05:06.894830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:00.365 14:05:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.365 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.623 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.623 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:00.623 "name": "Existed_Raid", 00:36:00.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:00.623 "strip_size_kb": 64, 00:36:00.623 "state": "configuring", 00:36:00.623 "raid_level": "raid5f", 00:36:00.623 "superblock": false, 00:36:00.623 "num_base_bdevs": 3, 00:36:00.623 "num_base_bdevs_discovered": 2, 00:36:00.623 "num_base_bdevs_operational": 3, 00:36:00.623 "base_bdevs_list": [ 00:36:00.623 { 
00:36:00.623 "name": "BaseBdev1", 00:36:00.623 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:00.623 "is_configured": true, 00:36:00.623 "data_offset": 0, 00:36:00.623 "data_size": 65536 00:36:00.623 }, 00:36:00.623 { 00:36:00.623 "name": null, 00:36:00.623 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:00.623 "is_configured": false, 00:36:00.623 "data_offset": 0, 00:36:00.623 "data_size": 65536 00:36:00.623 }, 00:36:00.623 { 00:36:00.623 "name": "BaseBdev3", 00:36:00.623 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:00.623 "is_configured": true, 00:36:00.623 "data_offset": 0, 00:36:00.623 "data_size": 65536 00:36:00.623 } 00:36:00.623 ] 00:36:00.623 }' 00:36:00.623 14:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:00.623 14:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 [2024-10-09 14:05:07.386945] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:00.880 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.137 14:05:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:01.137 "name": "Existed_Raid", 00:36:01.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.137 "strip_size_kb": 64, 00:36:01.137 "state": "configuring", 00:36:01.137 "raid_level": "raid5f", 00:36:01.137 "superblock": false, 00:36:01.137 "num_base_bdevs": 3, 00:36:01.137 "num_base_bdevs_discovered": 1, 00:36:01.137 "num_base_bdevs_operational": 3, 00:36:01.137 "base_bdevs_list": [ 00:36:01.137 { 00:36:01.137 "name": null, 00:36:01.137 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:01.137 "is_configured": false, 00:36:01.137 "data_offset": 0, 00:36:01.137 "data_size": 65536 00:36:01.137 }, 00:36:01.137 { 00:36:01.137 "name": null, 00:36:01.137 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:01.137 "is_configured": false, 00:36:01.137 "data_offset": 0, 00:36:01.137 "data_size": 65536 00:36:01.137 }, 00:36:01.137 { 00:36:01.137 "name": "BaseBdev3", 00:36:01.137 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:01.137 "is_configured": true, 00:36:01.137 "data_offset": 0, 00:36:01.137 "data_size": 65536 00:36:01.137 } 00:36:01.137 ] 00:36:01.137 }' 00:36:01.137 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:01.137 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.395 [2024-10-09 14:05:07.897631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.395 14:05:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.395 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.653 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:01.653 "name": "Existed_Raid", 00:36:01.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.653 "strip_size_kb": 64, 00:36:01.653 "state": "configuring", 00:36:01.653 "raid_level": "raid5f", 00:36:01.653 "superblock": false, 00:36:01.653 "num_base_bdevs": 3, 00:36:01.653 "num_base_bdevs_discovered": 2, 00:36:01.653 "num_base_bdevs_operational": 3, 00:36:01.653 "base_bdevs_list": [ 00:36:01.653 { 00:36:01.653 "name": null, 00:36:01.653 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:01.653 "is_configured": false, 00:36:01.653 "data_offset": 0, 00:36:01.653 "data_size": 65536 00:36:01.653 }, 00:36:01.653 { 00:36:01.653 "name": "BaseBdev2", 00:36:01.653 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:01.653 "is_configured": true, 00:36:01.653 "data_offset": 0, 00:36:01.653 "data_size": 65536 00:36:01.653 }, 00:36:01.653 { 00:36:01.653 "name": "BaseBdev3", 00:36:01.653 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:01.653 "is_configured": true, 00:36:01.653 "data_offset": 0, 00:36:01.653 "data_size": 65536 00:36:01.653 } 00:36:01.653 ] 00:36:01.653 }' 00:36:01.653 14:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:01.653 14:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.911 14:05:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 502e041d-4708-4ae6-bc38-56eb051a12b0 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.911 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.169 [2024-10-09 14:05:08.468843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:02.169 [2024-10-09 14:05:08.468890] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:36:02.169 [2024-10-09 14:05:08.468903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:36:02.169 [2024-10-09 14:05:08.469185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:36:02.169 [2024-10-09 14:05:08.469764] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:36:02.169 [2024-10-09 14:05:08.469789] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:36:02.169 [2024-10-09 14:05:08.469992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:02.169 NewBaseBdev 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.169 14:05:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.169 [ 00:36:02.169 { 00:36:02.169 "name": "NewBaseBdev", 00:36:02.169 "aliases": [ 00:36:02.169 "502e041d-4708-4ae6-bc38-56eb051a12b0" 00:36:02.169 ], 00:36:02.169 "product_name": "Malloc disk", 00:36:02.169 "block_size": 512, 00:36:02.169 "num_blocks": 65536, 00:36:02.169 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:02.169 "assigned_rate_limits": { 00:36:02.169 "rw_ios_per_sec": 0, 00:36:02.169 "rw_mbytes_per_sec": 0, 00:36:02.169 "r_mbytes_per_sec": 0, 00:36:02.169 "w_mbytes_per_sec": 0 00:36:02.169 }, 00:36:02.169 "claimed": true, 00:36:02.169 "claim_type": "exclusive_write", 00:36:02.169 "zoned": false, 00:36:02.169 "supported_io_types": { 00:36:02.169 "read": true, 00:36:02.169 "write": true, 00:36:02.169 "unmap": true, 00:36:02.169 "flush": true, 00:36:02.169 "reset": true, 00:36:02.169 "nvme_admin": false, 00:36:02.169 "nvme_io": false, 00:36:02.169 "nvme_io_md": false, 00:36:02.169 "write_zeroes": true, 00:36:02.169 "zcopy": true, 00:36:02.169 "get_zone_info": false, 00:36:02.169 "zone_management": false, 00:36:02.169 "zone_append": false, 00:36:02.169 "compare": false, 00:36:02.169 "compare_and_write": false, 00:36:02.169 "abort": true, 00:36:02.169 "seek_hole": false, 00:36:02.169 "seek_data": false, 00:36:02.169 "copy": true, 00:36:02.169 "nvme_iov_md": false 00:36:02.169 }, 00:36:02.169 "memory_domains": [ 00:36:02.169 { 00:36:02.169 "dma_device_id": "system", 00:36:02.169 "dma_device_type": 1 00:36:02.169 }, 00:36:02.169 { 00:36:02.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:02.169 "dma_device_type": 2 00:36:02.169 } 00:36:02.169 ], 00:36:02.169 "driver_specific": {} 00:36:02.169 } 00:36:02.169 ] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:36:02.169 14:05:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:02.169 "name": "Existed_Raid", 00:36:02.169 "uuid": "a0c34752-75d4-463d-9e40-06de67ccdfbb", 00:36:02.169 "strip_size_kb": 64, 00:36:02.169 "state": "online", 
00:36:02.169 "raid_level": "raid5f", 00:36:02.169 "superblock": false, 00:36:02.169 "num_base_bdevs": 3, 00:36:02.169 "num_base_bdevs_discovered": 3, 00:36:02.169 "num_base_bdevs_operational": 3, 00:36:02.169 "base_bdevs_list": [ 00:36:02.169 { 00:36:02.169 "name": "NewBaseBdev", 00:36:02.169 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:02.169 "is_configured": true, 00:36:02.169 "data_offset": 0, 00:36:02.169 "data_size": 65536 00:36:02.169 }, 00:36:02.169 { 00:36:02.169 "name": "BaseBdev2", 00:36:02.169 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:02.169 "is_configured": true, 00:36:02.169 "data_offset": 0, 00:36:02.169 "data_size": 65536 00:36:02.169 }, 00:36:02.169 { 00:36:02.169 "name": "BaseBdev3", 00:36:02.169 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:02.169 "is_configured": true, 00:36:02.169 "data_offset": 0, 00:36:02.169 "data_size": 65536 00:36:02.169 } 00:36:02.169 ] 00:36:02.169 }' 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:02.169 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.736 14:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.736 [2024-10-09 14:05:09.001217] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:02.736 "name": "Existed_Raid", 00:36:02.736 "aliases": [ 00:36:02.736 "a0c34752-75d4-463d-9e40-06de67ccdfbb" 00:36:02.736 ], 00:36:02.736 "product_name": "Raid Volume", 00:36:02.736 "block_size": 512, 00:36:02.736 "num_blocks": 131072, 00:36:02.736 "uuid": "a0c34752-75d4-463d-9e40-06de67ccdfbb", 00:36:02.736 "assigned_rate_limits": { 00:36:02.736 "rw_ios_per_sec": 0, 00:36:02.736 "rw_mbytes_per_sec": 0, 00:36:02.736 "r_mbytes_per_sec": 0, 00:36:02.736 "w_mbytes_per_sec": 0 00:36:02.736 }, 00:36:02.736 "claimed": false, 00:36:02.736 "zoned": false, 00:36:02.736 "supported_io_types": { 00:36:02.736 "read": true, 00:36:02.736 "write": true, 00:36:02.736 "unmap": false, 00:36:02.736 "flush": false, 00:36:02.736 "reset": true, 00:36:02.736 "nvme_admin": false, 00:36:02.736 "nvme_io": false, 00:36:02.736 "nvme_io_md": false, 00:36:02.736 "write_zeroes": true, 00:36:02.736 "zcopy": false, 00:36:02.736 "get_zone_info": false, 00:36:02.736 "zone_management": false, 00:36:02.736 "zone_append": false, 00:36:02.736 "compare": false, 00:36:02.736 "compare_and_write": false, 00:36:02.736 "abort": false, 00:36:02.736 "seek_hole": false, 00:36:02.736 "seek_data": false, 00:36:02.736 "copy": false, 00:36:02.736 "nvme_iov_md": false 00:36:02.736 }, 00:36:02.736 "driver_specific": { 00:36:02.736 "raid": { 00:36:02.736 "uuid": "a0c34752-75d4-463d-9e40-06de67ccdfbb", 
00:36:02.736 "strip_size_kb": 64, 00:36:02.736 "state": "online", 00:36:02.736 "raid_level": "raid5f", 00:36:02.736 "superblock": false, 00:36:02.736 "num_base_bdevs": 3, 00:36:02.736 "num_base_bdevs_discovered": 3, 00:36:02.736 "num_base_bdevs_operational": 3, 00:36:02.736 "base_bdevs_list": [ 00:36:02.736 { 00:36:02.736 "name": "NewBaseBdev", 00:36:02.736 "uuid": "502e041d-4708-4ae6-bc38-56eb051a12b0", 00:36:02.736 "is_configured": true, 00:36:02.736 "data_offset": 0, 00:36:02.736 "data_size": 65536 00:36:02.736 }, 00:36:02.736 { 00:36:02.736 "name": "BaseBdev2", 00:36:02.736 "uuid": "317b9d0c-e55a-4836-8510-7b6ff6db5fe8", 00:36:02.736 "is_configured": true, 00:36:02.736 "data_offset": 0, 00:36:02.736 "data_size": 65536 00:36:02.736 }, 00:36:02.736 { 00:36:02.736 "name": "BaseBdev3", 00:36:02.736 "uuid": "4f67e581-0172-4ba5-be90-9265ceeb0955", 00:36:02.736 "is_configured": true, 00:36:02.736 "data_offset": 0, 00:36:02.736 "data_size": 65536 00:36:02.736 } 00:36:02.736 ] 00:36:02.736 } 00:36:02.736 } 00:36:02.736 }' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:02.736 BaseBdev2 00:36:02.736 BaseBdev3' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.736 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:02.737 14:05:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.737 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:02.996 [2024-10-09 14:05:09.293098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:02.996 [2024-10-09 14:05:09.293132] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:02.996 [2024-10-09 14:05:09.293218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:02.996 [2024-10-09 14:05:09.293523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:02.996 [2024-10-09 14:05:09.293568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90872 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90872 ']' 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 90872 
00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90872 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:02.996 killing process with pid 90872 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90872' 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90872 00:36:02.996 [2024-10-09 14:05:09.335353] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:02.996 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90872 00:36:02.996 [2024-10-09 14:05:09.367066] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:36:03.255 00:36:03.255 real 0m9.279s 00:36:03.255 user 0m15.934s 00:36:03.255 sys 0m2.045s 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:03.255 ************************************ 00:36:03.255 END TEST raid5f_state_function_test 00:36:03.255 ************************************ 00:36:03.255 14:05:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:36:03.255 14:05:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:36:03.255 
14:05:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.255 14:05:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:03.255 ************************************ 00:36:03.255 START TEST raid5f_state_function_test_sb 00:36:03.255 ************************************ 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:03.255 
14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91478 00:36:03.255 Process raid pid: 91478 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91478' 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91478 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:03.255 14:05:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91478 ']' 00:36:03.255 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.256 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:03.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.256 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.256 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:03.256 14:05:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:03.514 [2024-10-09 14:05:09.805760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:03.514 [2024-10-09 14:05:09.805979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.514 [2024-10-09 14:05:09.985108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.514 [2024-10-09 14:05:10.034564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.773 [2024-10-09 14:05:10.078663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:03.773 [2024-10-09 14:05:10.078709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:36:04.341 14:05:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.341 [2024-10-09 14:05:10.806315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:04.341 [2024-10-09 14:05:10.806363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:04.341 [2024-10-09 14:05:10.806380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:04.341 [2024-10-09 14:05:10.806393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:04.341 [2024-10-09 14:05:10.806401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:04.341 [2024-10-09 14:05:10.806418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.341 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:04.341 "name": "Existed_Raid", 00:36:04.341 "uuid": "5a0ca0e8-ff69-49c4-864b-58bd76a84973", 00:36:04.341 "strip_size_kb": 64, 00:36:04.341 "state": "configuring", 00:36:04.341 "raid_level": "raid5f", 00:36:04.341 "superblock": true, 00:36:04.341 "num_base_bdevs": 3, 00:36:04.342 "num_base_bdevs_discovered": 0, 00:36:04.342 "num_base_bdevs_operational": 3, 00:36:04.342 "base_bdevs_list": [ 00:36:04.342 { 00:36:04.342 "name": "BaseBdev1", 00:36:04.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.342 "is_configured": false, 00:36:04.342 "data_offset": 0, 00:36:04.342 "data_size": 0 00:36:04.342 }, 00:36:04.342 { 00:36:04.342 "name": "BaseBdev2", 00:36:04.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.342 "is_configured": false, 00:36:04.342 
"data_offset": 0, 00:36:04.342 "data_size": 0 00:36:04.342 }, 00:36:04.342 { 00:36:04.342 "name": "BaseBdev3", 00:36:04.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.342 "is_configured": false, 00:36:04.342 "data_offset": 0, 00:36:04.342 "data_size": 0 00:36:04.342 } 00:36:04.342 ] 00:36:04.342 }' 00:36:04.342 14:05:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:04.342 14:05:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 [2024-10-09 14:05:11.266329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:04.910 [2024-10-09 14:05:11.266381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 [2024-10-09 14:05:11.274366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:04.910 [2024-10-09 14:05:11.274411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:04.910 [2024-10-09 14:05:11.274421] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:04.910 [2024-10-09 14:05:11.274433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:04.910 [2024-10-09 14:05:11.274441] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:04.910 [2024-10-09 14:05:11.274453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 [2024-10-09 14:05:11.292031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:04.910 BaseBdev1 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 [ 00:36:04.910 { 00:36:04.910 "name": "BaseBdev1", 00:36:04.910 "aliases": [ 00:36:04.910 "e42742d1-78f0-413d-bbed-b29bcac5d833" 00:36:04.910 ], 00:36:04.910 "product_name": "Malloc disk", 00:36:04.910 "block_size": 512, 00:36:04.910 "num_blocks": 65536, 00:36:04.910 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 00:36:04.910 "assigned_rate_limits": { 00:36:04.910 "rw_ios_per_sec": 0, 00:36:04.910 "rw_mbytes_per_sec": 0, 00:36:04.910 "r_mbytes_per_sec": 0, 00:36:04.910 "w_mbytes_per_sec": 0 00:36:04.910 }, 00:36:04.910 "claimed": true, 00:36:04.910 "claim_type": "exclusive_write", 00:36:04.910 "zoned": false, 00:36:04.910 "supported_io_types": { 00:36:04.910 "read": true, 00:36:04.910 "write": true, 00:36:04.910 "unmap": true, 00:36:04.910 "flush": true, 00:36:04.910 "reset": true, 00:36:04.910 "nvme_admin": false, 00:36:04.910 "nvme_io": false, 00:36:04.910 "nvme_io_md": false, 00:36:04.910 "write_zeroes": true, 00:36:04.910 "zcopy": true, 00:36:04.910 "get_zone_info": false, 00:36:04.910 "zone_management": false, 00:36:04.910 "zone_append": false, 00:36:04.910 "compare": false, 00:36:04.910 "compare_and_write": false, 00:36:04.910 "abort": true, 00:36:04.910 "seek_hole": false, 00:36:04.910 
"seek_data": false, 00:36:04.910 "copy": true, 00:36:04.910 "nvme_iov_md": false 00:36:04.910 }, 00:36:04.910 "memory_domains": [ 00:36:04.910 { 00:36:04.910 "dma_device_id": "system", 00:36:04.910 "dma_device_type": 1 00:36:04.910 }, 00:36:04.910 { 00:36:04.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:04.910 "dma_device_type": 2 00:36:04.910 } 00:36:04.910 ], 00:36:04.910 "driver_specific": {} 00:36:04.910 } 00:36:04.910 ] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.910 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:04.911 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:04.911 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:04.911 "name": "Existed_Raid", 00:36:04.911 "uuid": "30e3306e-33ad-4ad1-8cd8-679fd66999dd", 00:36:04.911 "strip_size_kb": 64, 00:36:04.911 "state": "configuring", 00:36:04.911 "raid_level": "raid5f", 00:36:04.911 "superblock": true, 00:36:04.911 "num_base_bdevs": 3, 00:36:04.911 "num_base_bdevs_discovered": 1, 00:36:04.911 "num_base_bdevs_operational": 3, 00:36:04.911 "base_bdevs_list": [ 00:36:04.911 { 00:36:04.911 "name": "BaseBdev1", 00:36:04.911 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 00:36:04.911 "is_configured": true, 00:36:04.911 "data_offset": 2048, 00:36:04.911 "data_size": 63488 00:36:04.911 }, 00:36:04.911 { 00:36:04.911 "name": "BaseBdev2", 00:36:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.911 "is_configured": false, 00:36:04.911 "data_offset": 0, 00:36:04.911 "data_size": 0 00:36:04.911 }, 00:36:04.911 { 00:36:04.911 "name": "BaseBdev3", 00:36:04.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.911 "is_configured": false, 00:36:04.911 "data_offset": 0, 00:36:04.911 "data_size": 0 00:36:04.911 } 00:36:04.911 ] 00:36:04.911 }' 00:36:04.911 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:04.911 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.477 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:36:05.477 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.477 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.477 [2024-10-09 14:05:11.776210] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:05.477 [2024-10-09 14:05:11.776288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:36:05.477 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.478 [2024-10-09 14:05:11.784268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:05.478 [2024-10-09 14:05:11.786748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:05.478 [2024-10-09 14:05:11.786791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:05.478 [2024-10-09 14:05:11.786803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:05.478 [2024-10-09 14:05:11.786819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:05.478 "name": 
"Existed_Raid", 00:36:05.478 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:05.478 "strip_size_kb": 64, 00:36:05.478 "state": "configuring", 00:36:05.478 "raid_level": "raid5f", 00:36:05.478 "superblock": true, 00:36:05.478 "num_base_bdevs": 3, 00:36:05.478 "num_base_bdevs_discovered": 1, 00:36:05.478 "num_base_bdevs_operational": 3, 00:36:05.478 "base_bdevs_list": [ 00:36:05.478 { 00:36:05.478 "name": "BaseBdev1", 00:36:05.478 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 00:36:05.478 "is_configured": true, 00:36:05.478 "data_offset": 2048, 00:36:05.478 "data_size": 63488 00:36:05.478 }, 00:36:05.478 { 00:36:05.478 "name": "BaseBdev2", 00:36:05.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.478 "is_configured": false, 00:36:05.478 "data_offset": 0, 00:36:05.478 "data_size": 0 00:36:05.478 }, 00:36:05.478 { 00:36:05.478 "name": "BaseBdev3", 00:36:05.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.478 "is_configured": false, 00:36:05.478 "data_offset": 0, 00:36:05.478 "data_size": 0 00:36:05.478 } 00:36:05.478 ] 00:36:05.478 }' 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:05.478 14:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.736 [2024-10-09 14:05:12.273020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:05.736 BaseBdev2 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.736 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.994 [ 00:36:05.994 { 00:36:05.994 "name": "BaseBdev2", 00:36:05.994 "aliases": [ 00:36:05.994 "5cd338f0-6fae-4632-acbf-f397a6d4fdfb" 00:36:05.994 ], 00:36:05.994 "product_name": "Malloc disk", 00:36:05.994 "block_size": 512, 00:36:05.994 "num_blocks": 65536, 00:36:05.994 "uuid": "5cd338f0-6fae-4632-acbf-f397a6d4fdfb", 00:36:05.994 "assigned_rate_limits": { 00:36:05.994 "rw_ios_per_sec": 0, 00:36:05.994 "rw_mbytes_per_sec": 0, 00:36:05.994 "r_mbytes_per_sec": 0, 00:36:05.994 "w_mbytes_per_sec": 0 00:36:05.994 }, 00:36:05.994 "claimed": true, 
00:36:05.994 "claim_type": "exclusive_write", 00:36:05.994 "zoned": false, 00:36:05.994 "supported_io_types": { 00:36:05.994 "read": true, 00:36:05.994 "write": true, 00:36:05.994 "unmap": true, 00:36:05.994 "flush": true, 00:36:05.994 "reset": true, 00:36:05.994 "nvme_admin": false, 00:36:05.994 "nvme_io": false, 00:36:05.994 "nvme_io_md": false, 00:36:05.994 "write_zeroes": true, 00:36:05.994 "zcopy": true, 00:36:05.994 "get_zone_info": false, 00:36:05.994 "zone_management": false, 00:36:05.994 "zone_append": false, 00:36:05.994 "compare": false, 00:36:05.994 "compare_and_write": false, 00:36:05.994 "abort": true, 00:36:05.994 "seek_hole": false, 00:36:05.994 "seek_data": false, 00:36:05.994 "copy": true, 00:36:05.994 "nvme_iov_md": false 00:36:05.994 }, 00:36:05.994 "memory_domains": [ 00:36:05.994 { 00:36:05.994 "dma_device_id": "system", 00:36:05.994 "dma_device_type": 1 00:36:05.994 }, 00:36:05.994 { 00:36:05.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:05.994 "dma_device_type": 2 00:36:05.994 } 00:36:05.994 ], 00:36:05.994 "driver_specific": {} 00:36:05.994 } 00:36:05.994 ] 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:05.994 14:05:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.994 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:05.994 "name": "Existed_Raid", 00:36:05.994 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:05.994 "strip_size_kb": 64, 00:36:05.994 "state": "configuring", 00:36:05.994 "raid_level": "raid5f", 00:36:05.994 "superblock": true, 00:36:05.994 "num_base_bdevs": 3, 00:36:05.995 "num_base_bdevs_discovered": 2, 00:36:05.995 "num_base_bdevs_operational": 3, 00:36:05.995 "base_bdevs_list": [ 00:36:05.995 { 00:36:05.995 "name": "BaseBdev1", 00:36:05.995 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 
00:36:05.995 "is_configured": true, 00:36:05.995 "data_offset": 2048, 00:36:05.995 "data_size": 63488 00:36:05.995 }, 00:36:05.995 { 00:36:05.995 "name": "BaseBdev2", 00:36:05.995 "uuid": "5cd338f0-6fae-4632-acbf-f397a6d4fdfb", 00:36:05.995 "is_configured": true, 00:36:05.995 "data_offset": 2048, 00:36:05.995 "data_size": 63488 00:36:05.995 }, 00:36:05.995 { 00:36:05.995 "name": "BaseBdev3", 00:36:05.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.995 "is_configured": false, 00:36:05.995 "data_offset": 0, 00:36:05.995 "data_size": 0 00:36:05.995 } 00:36:05.995 ] 00:36:05.995 }' 00:36:05.995 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:05.995 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.293 [2024-10-09 14:05:12.764860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:06.293 [2024-10-09 14:05:12.765081] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:36:06.293 [2024-10-09 14:05:12.765105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:06.293 BaseBdev3 00:36:06.293 [2024-10-09 14:05:12.765438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:36:06.293 [2024-10-09 14:05:12.765919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:36:06.293 [2024-10-09 14:05:12.765941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:36:06.293 
14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.293 [2024-10-09 14:05:12.766067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.293 [ 00:36:06.293 { 00:36:06.293 "name": "BaseBdev3", 00:36:06.293 "aliases": [ 00:36:06.293 "22e7d4df-59a4-45cc-bde9-740041a73f87" 00:36:06.293 ], 00:36:06.293 "product_name": "Malloc disk", 00:36:06.293 "block_size": 512, 00:36:06.293 "num_blocks": 
65536, 00:36:06.293 "uuid": "22e7d4df-59a4-45cc-bde9-740041a73f87", 00:36:06.293 "assigned_rate_limits": { 00:36:06.293 "rw_ios_per_sec": 0, 00:36:06.293 "rw_mbytes_per_sec": 0, 00:36:06.293 "r_mbytes_per_sec": 0, 00:36:06.293 "w_mbytes_per_sec": 0 00:36:06.293 }, 00:36:06.293 "claimed": true, 00:36:06.293 "claim_type": "exclusive_write", 00:36:06.293 "zoned": false, 00:36:06.293 "supported_io_types": { 00:36:06.293 "read": true, 00:36:06.293 "write": true, 00:36:06.293 "unmap": true, 00:36:06.293 "flush": true, 00:36:06.293 "reset": true, 00:36:06.293 "nvme_admin": false, 00:36:06.293 "nvme_io": false, 00:36:06.293 "nvme_io_md": false, 00:36:06.293 "write_zeroes": true, 00:36:06.293 "zcopy": true, 00:36:06.293 "get_zone_info": false, 00:36:06.293 "zone_management": false, 00:36:06.293 "zone_append": false, 00:36:06.293 "compare": false, 00:36:06.293 "compare_and_write": false, 00:36:06.293 "abort": true, 00:36:06.293 "seek_hole": false, 00:36:06.293 "seek_data": false, 00:36:06.293 "copy": true, 00:36:06.293 "nvme_iov_md": false 00:36:06.293 }, 00:36:06.293 "memory_domains": [ 00:36:06.293 { 00:36:06.293 "dma_device_id": "system", 00:36:06.293 "dma_device_type": 1 00:36:06.293 }, 00:36:06.293 { 00:36:06.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.293 "dma_device_type": 2 00:36:06.293 } 00:36:06.293 ], 00:36:06.293 "driver_specific": {} 00:36:06.293 } 00:36:06.293 ] 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 
00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.293 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.574 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.574 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:06.574 "name": "Existed_Raid", 00:36:06.574 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:06.574 "strip_size_kb": 64, 00:36:06.574 "state": "online", 00:36:06.574 "raid_level": "raid5f", 00:36:06.574 "superblock": true, 
00:36:06.574 "num_base_bdevs": 3, 00:36:06.574 "num_base_bdevs_discovered": 3, 00:36:06.574 "num_base_bdevs_operational": 3, 00:36:06.574 "base_bdevs_list": [ 00:36:06.574 { 00:36:06.574 "name": "BaseBdev1", 00:36:06.574 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 00:36:06.574 "is_configured": true, 00:36:06.574 "data_offset": 2048, 00:36:06.574 "data_size": 63488 00:36:06.574 }, 00:36:06.574 { 00:36:06.574 "name": "BaseBdev2", 00:36:06.574 "uuid": "5cd338f0-6fae-4632-acbf-f397a6d4fdfb", 00:36:06.574 "is_configured": true, 00:36:06.574 "data_offset": 2048, 00:36:06.574 "data_size": 63488 00:36:06.574 }, 00:36:06.574 { 00:36:06.574 "name": "BaseBdev3", 00:36:06.574 "uuid": "22e7d4df-59a4-45cc-bde9-740041a73f87", 00:36:06.574 "is_configured": true, 00:36:06.574 "data_offset": 2048, 00:36:06.574 "data_size": 63488 00:36:06.574 } 00:36:06.574 ] 00:36:06.574 }' 00:36:06.574 14:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:06.574 14:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.833 [2024-10-09 14:05:13.257268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:06.833 "name": "Existed_Raid", 00:36:06.833 "aliases": [ 00:36:06.833 "b28c0a41-09ab-4dd5-8197-c81bab23eb2e" 00:36:06.833 ], 00:36:06.833 "product_name": "Raid Volume", 00:36:06.833 "block_size": 512, 00:36:06.833 "num_blocks": 126976, 00:36:06.833 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:06.833 "assigned_rate_limits": { 00:36:06.833 "rw_ios_per_sec": 0, 00:36:06.833 "rw_mbytes_per_sec": 0, 00:36:06.833 "r_mbytes_per_sec": 0, 00:36:06.833 "w_mbytes_per_sec": 0 00:36:06.833 }, 00:36:06.833 "claimed": false, 00:36:06.833 "zoned": false, 00:36:06.833 "supported_io_types": { 00:36:06.833 "read": true, 00:36:06.833 "write": true, 00:36:06.833 "unmap": false, 00:36:06.833 "flush": false, 00:36:06.833 "reset": true, 00:36:06.833 "nvme_admin": false, 00:36:06.833 "nvme_io": false, 00:36:06.833 "nvme_io_md": false, 00:36:06.833 "write_zeroes": true, 00:36:06.833 "zcopy": false, 00:36:06.833 "get_zone_info": false, 00:36:06.833 "zone_management": false, 00:36:06.833 "zone_append": false, 00:36:06.833 "compare": false, 00:36:06.833 "compare_and_write": false, 00:36:06.833 "abort": false, 00:36:06.833 "seek_hole": false, 00:36:06.833 "seek_data": false, 00:36:06.833 "copy": false, 00:36:06.833 "nvme_iov_md": false 00:36:06.833 }, 00:36:06.833 "driver_specific": { 00:36:06.833 "raid": { 00:36:06.833 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:06.833 
"strip_size_kb": 64, 00:36:06.833 "state": "online", 00:36:06.833 "raid_level": "raid5f", 00:36:06.833 "superblock": true, 00:36:06.833 "num_base_bdevs": 3, 00:36:06.833 "num_base_bdevs_discovered": 3, 00:36:06.833 "num_base_bdevs_operational": 3, 00:36:06.833 "base_bdevs_list": [ 00:36:06.833 { 00:36:06.833 "name": "BaseBdev1", 00:36:06.833 "uuid": "e42742d1-78f0-413d-bbed-b29bcac5d833", 00:36:06.833 "is_configured": true, 00:36:06.833 "data_offset": 2048, 00:36:06.833 "data_size": 63488 00:36:06.833 }, 00:36:06.833 { 00:36:06.833 "name": "BaseBdev2", 00:36:06.833 "uuid": "5cd338f0-6fae-4632-acbf-f397a6d4fdfb", 00:36:06.833 "is_configured": true, 00:36:06.833 "data_offset": 2048, 00:36:06.833 "data_size": 63488 00:36:06.833 }, 00:36:06.833 { 00:36:06.833 "name": "BaseBdev3", 00:36:06.833 "uuid": "22e7d4df-59a4-45cc-bde9-740041a73f87", 00:36:06.833 "is_configured": true, 00:36:06.833 "data_offset": 2048, 00:36:06.833 "data_size": 63488 00:36:06.833 } 00:36:06.833 ] 00:36:06.833 } 00:36:06.833 } 00:36:06.833 }' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:06.833 BaseBdev2 00:36:06.833 BaseBdev3' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:06.833 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.093 [2024-10-09 14:05:13.533183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:07.093 "name": "Existed_Raid", 00:36:07.093 "uuid": "b28c0a41-09ab-4dd5-8197-c81bab23eb2e", 00:36:07.093 "strip_size_kb": 64, 00:36:07.093 "state": "online", 00:36:07.093 "raid_level": "raid5f", 00:36:07.093 "superblock": true, 00:36:07.093 "num_base_bdevs": 3, 00:36:07.093 "num_base_bdevs_discovered": 2, 00:36:07.093 "num_base_bdevs_operational": 2, 
00:36:07.093 "base_bdevs_list": [ 00:36:07.093 { 00:36:07.093 "name": null, 00:36:07.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.093 "is_configured": false, 00:36:07.093 "data_offset": 0, 00:36:07.093 "data_size": 63488 00:36:07.093 }, 00:36:07.093 { 00:36:07.093 "name": "BaseBdev2", 00:36:07.093 "uuid": "5cd338f0-6fae-4632-acbf-f397a6d4fdfb", 00:36:07.093 "is_configured": true, 00:36:07.093 "data_offset": 2048, 00:36:07.093 "data_size": 63488 00:36:07.093 }, 00:36:07.093 { 00:36:07.093 "name": "BaseBdev3", 00:36:07.093 "uuid": "22e7d4df-59a4-45cc-bde9-740041a73f87", 00:36:07.093 "is_configured": true, 00:36:07.093 "data_offset": 2048, 00:36:07.093 "data_size": 63488 00:36:07.093 } 00:36:07.093 ] 00:36:07.093 }' 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:07.093 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:07.660 14:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 [2024-10-09 14:05:14.051104] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:07.660 [2024-10-09 14:05:14.051437] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:07.660 [2024-10-09 14:05:14.068235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:36:07.660 
14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 [2024-10-09 14:05:14.120299] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:07.660 [2024-10-09 14:05:14.120364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 BaseBdev2 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.660 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.918 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:07.918 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.918 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.918 [ 00:36:07.918 { 
00:36:07.918 "name": "BaseBdev2", 00:36:07.918 "aliases": [ 00:36:07.918 "bac9cd65-ebfc-471b-8f88-80b10c7c7481" 00:36:07.918 ], 00:36:07.918 "product_name": "Malloc disk", 00:36:07.918 "block_size": 512, 00:36:07.918 "num_blocks": 65536, 00:36:07.918 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:07.918 "assigned_rate_limits": { 00:36:07.918 "rw_ios_per_sec": 0, 00:36:07.918 "rw_mbytes_per_sec": 0, 00:36:07.918 "r_mbytes_per_sec": 0, 00:36:07.918 "w_mbytes_per_sec": 0 00:36:07.918 }, 00:36:07.918 "claimed": false, 00:36:07.918 "zoned": false, 00:36:07.918 "supported_io_types": { 00:36:07.918 "read": true, 00:36:07.918 "write": true, 00:36:07.918 "unmap": true, 00:36:07.918 "flush": true, 00:36:07.918 "reset": true, 00:36:07.918 "nvme_admin": false, 00:36:07.918 "nvme_io": false, 00:36:07.918 "nvme_io_md": false, 00:36:07.918 "write_zeroes": true, 00:36:07.918 "zcopy": true, 00:36:07.918 "get_zone_info": false, 00:36:07.918 "zone_management": false, 00:36:07.918 "zone_append": false, 00:36:07.918 "compare": false, 00:36:07.918 "compare_and_write": false, 00:36:07.918 "abort": true, 00:36:07.918 "seek_hole": false, 00:36:07.918 "seek_data": false, 00:36:07.918 "copy": true, 00:36:07.918 "nvme_iov_md": false 00:36:07.918 }, 00:36:07.918 "memory_domains": [ 00:36:07.918 { 00:36:07.918 "dma_device_id": "system", 00:36:07.918 "dma_device_type": 1 00:36:07.918 }, 00:36:07.918 { 00:36:07.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:07.918 "dma_device_type": 2 00:36:07.919 } 00:36:07.919 ], 00:36:07.919 "driver_specific": {} 00:36:07.919 } 00:36:07.919 ] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.919 BaseBdev3 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.919 14:05:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.919 [ 00:36:07.919 { 00:36:07.919 "name": "BaseBdev3", 00:36:07.919 "aliases": [ 00:36:07.919 "f6c01313-1f7f-46cf-b203-cce27e48c33d" 00:36:07.919 ], 00:36:07.919 "product_name": "Malloc disk", 00:36:07.919 "block_size": 512, 00:36:07.919 "num_blocks": 65536, 00:36:07.919 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:07.919 "assigned_rate_limits": { 00:36:07.919 "rw_ios_per_sec": 0, 00:36:07.919 "rw_mbytes_per_sec": 0, 00:36:07.919 "r_mbytes_per_sec": 0, 00:36:07.919 "w_mbytes_per_sec": 0 00:36:07.919 }, 00:36:07.919 "claimed": false, 00:36:07.919 "zoned": false, 00:36:07.919 "supported_io_types": { 00:36:07.919 "read": true, 00:36:07.919 "write": true, 00:36:07.919 "unmap": true, 00:36:07.919 "flush": true, 00:36:07.919 "reset": true, 00:36:07.919 "nvme_admin": false, 00:36:07.919 "nvme_io": false, 00:36:07.919 "nvme_io_md": false, 00:36:07.919 "write_zeroes": true, 00:36:07.919 "zcopy": true, 00:36:07.919 "get_zone_info": false, 00:36:07.919 "zone_management": false, 00:36:07.919 "zone_append": false, 00:36:07.919 "compare": false, 00:36:07.919 "compare_and_write": false, 00:36:07.919 "abort": true, 00:36:07.919 "seek_hole": false, 00:36:07.919 "seek_data": false, 00:36:07.919 "copy": true, 00:36:07.919 "nvme_iov_md": false 00:36:07.919 }, 00:36:07.919 "memory_domains": [ 00:36:07.919 { 00:36:07.919 "dma_device_id": "system", 00:36:07.919 "dma_device_type": 1 00:36:07.919 }, 00:36:07.919 { 00:36:07.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:07.919 "dma_device_type": 2 00:36:07.919 } 00:36:07.919 ], 00:36:07.919 "driver_specific": {} 00:36:07.919 } 00:36:07.919 ] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.919 [2024-10-09 14:05:14.284217] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:07.919 [2024-10-09 14:05:14.284261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:07.919 [2024-10-09 14:05:14.284284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:07.919 [2024-10-09 14:05:14.286771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:07.919 14:05:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:07.919 "name": "Existed_Raid", 00:36:07.919 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:07.919 "strip_size_kb": 64, 00:36:07.919 "state": "configuring", 00:36:07.919 "raid_level": "raid5f", 00:36:07.919 "superblock": true, 00:36:07.919 "num_base_bdevs": 3, 00:36:07.919 "num_base_bdevs_discovered": 2, 00:36:07.919 "num_base_bdevs_operational": 3, 00:36:07.919 "base_bdevs_list": [ 00:36:07.919 { 00:36:07.919 "name": "BaseBdev1", 00:36:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.919 "is_configured": false, 00:36:07.919 "data_offset": 0, 00:36:07.919 "data_size": 0 00:36:07.919 }, 00:36:07.919 { 00:36:07.919 "name": "BaseBdev2", 00:36:07.919 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:07.919 "is_configured": true, 00:36:07.919 "data_offset": 2048, 00:36:07.919 "data_size": 63488 00:36:07.919 }, 00:36:07.919 { 
00:36:07.919 "name": "BaseBdev3", 00:36:07.919 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:07.919 "is_configured": true, 00:36:07.919 "data_offset": 2048, 00:36:07.919 "data_size": 63488 00:36:07.919 } 00:36:07.919 ] 00:36:07.919 }' 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:07.919 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.486 [2024-10-09 14:05:14.740405] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:08.486 "name": "Existed_Raid", 00:36:08.486 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:08.486 "strip_size_kb": 64, 00:36:08.486 "state": "configuring", 00:36:08.486 "raid_level": "raid5f", 00:36:08.486 "superblock": true, 00:36:08.486 "num_base_bdevs": 3, 00:36:08.486 "num_base_bdevs_discovered": 1, 00:36:08.486 "num_base_bdevs_operational": 3, 00:36:08.486 "base_bdevs_list": [ 00:36:08.486 { 00:36:08.486 "name": "BaseBdev1", 00:36:08.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:08.486 "is_configured": false, 00:36:08.486 "data_offset": 0, 00:36:08.486 "data_size": 0 00:36:08.486 }, 00:36:08.486 { 00:36:08.486 "name": null, 00:36:08.486 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:08.486 "is_configured": false, 00:36:08.486 "data_offset": 0, 00:36:08.486 "data_size": 63488 00:36:08.486 }, 00:36:08.486 { 00:36:08.486 "name": "BaseBdev3", 00:36:08.486 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:08.486 "is_configured": true, 00:36:08.486 "data_offset": 2048, 00:36:08.486 "data_size": 
63488 00:36:08.486 } 00:36:08.486 ] 00:36:08.486 }' 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:08.486 14:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.746 [2024-10-09 14:05:15.247455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:08.746 BaseBdev1 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:08.746 14:05:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.746 [ 00:36:08.746 { 00:36:08.746 "name": "BaseBdev1", 00:36:08.746 "aliases": [ 00:36:08.746 "6a1f268a-3860-4a5f-ad96-154ae7ae2698" 00:36:08.746 ], 00:36:08.746 "product_name": "Malloc disk", 00:36:08.746 "block_size": 512, 00:36:08.746 "num_blocks": 65536, 00:36:08.746 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:08.746 "assigned_rate_limits": { 00:36:08.746 "rw_ios_per_sec": 0, 00:36:08.746 "rw_mbytes_per_sec": 0, 00:36:08.746 "r_mbytes_per_sec": 0, 00:36:08.746 "w_mbytes_per_sec": 0 00:36:08.746 }, 00:36:08.746 "claimed": true, 00:36:08.746 "claim_type": "exclusive_write", 00:36:08.746 "zoned": false, 00:36:08.746 "supported_io_types": { 00:36:08.746 "read": true, 00:36:08.746 "write": true, 00:36:08.746 "unmap": true, 00:36:08.746 "flush": true, 00:36:08.746 "reset": true, 00:36:08.746 "nvme_admin": false, 00:36:08.746 
"nvme_io": false, 00:36:08.746 "nvme_io_md": false, 00:36:08.746 "write_zeroes": true, 00:36:08.746 "zcopy": true, 00:36:08.746 "get_zone_info": false, 00:36:08.746 "zone_management": false, 00:36:08.746 "zone_append": false, 00:36:08.746 "compare": false, 00:36:08.746 "compare_and_write": false, 00:36:08.746 "abort": true, 00:36:08.746 "seek_hole": false, 00:36:08.746 "seek_data": false, 00:36:08.746 "copy": true, 00:36:08.746 "nvme_iov_md": false 00:36:08.746 }, 00:36:08.746 "memory_domains": [ 00:36:08.746 { 00:36:08.746 "dma_device_id": "system", 00:36:08.746 "dma_device_type": 1 00:36:08.746 }, 00:36:08.746 { 00:36:08.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:08.746 "dma_device_type": 2 00:36:08.746 } 00:36:08.746 ], 00:36:08.746 "driver_specific": {} 00:36:08.746 } 00:36:08.746 ] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:08.746 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.747 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.005 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.005 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.005 "name": "Existed_Raid", 00:36:09.005 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:09.005 "strip_size_kb": 64, 00:36:09.005 "state": "configuring", 00:36:09.005 "raid_level": "raid5f", 00:36:09.005 "superblock": true, 00:36:09.005 "num_base_bdevs": 3, 00:36:09.005 "num_base_bdevs_discovered": 2, 00:36:09.005 "num_base_bdevs_operational": 3, 00:36:09.005 "base_bdevs_list": [ 00:36:09.005 { 00:36:09.005 "name": "BaseBdev1", 00:36:09.005 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:09.005 "is_configured": true, 00:36:09.005 "data_offset": 2048, 00:36:09.005 "data_size": 63488 00:36:09.005 }, 00:36:09.005 { 00:36:09.005 "name": null, 00:36:09.005 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:09.005 "is_configured": false, 00:36:09.005 "data_offset": 0, 00:36:09.005 "data_size": 63488 00:36:09.005 }, 00:36:09.005 { 00:36:09.005 "name": "BaseBdev3", 00:36:09.005 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:09.005 "is_configured": true, 00:36:09.005 "data_offset": 2048, 00:36:09.005 "data_size": 
63488 00:36:09.005 } 00:36:09.005 ] 00:36:09.005 }' 00:36:09.005 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.005 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.264 [2024-10-09 14:05:15.751643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:09.264 14:05:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.264 "name": "Existed_Raid", 00:36:09.264 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:09.264 "strip_size_kb": 64, 00:36:09.264 "state": "configuring", 00:36:09.264 "raid_level": "raid5f", 00:36:09.264 "superblock": true, 00:36:09.264 "num_base_bdevs": 3, 00:36:09.264 "num_base_bdevs_discovered": 1, 00:36:09.264 "num_base_bdevs_operational": 3, 00:36:09.264 "base_bdevs_list": [ 00:36:09.264 { 00:36:09.264 "name": "BaseBdev1", 00:36:09.264 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 
00:36:09.264 "is_configured": true, 00:36:09.264 "data_offset": 2048, 00:36:09.264 "data_size": 63488 00:36:09.264 }, 00:36:09.264 { 00:36:09.264 "name": null, 00:36:09.264 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:09.264 "is_configured": false, 00:36:09.264 "data_offset": 0, 00:36:09.264 "data_size": 63488 00:36:09.264 }, 00:36:09.264 { 00:36:09.264 "name": null, 00:36:09.264 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:09.264 "is_configured": false, 00:36:09.264 "data_offset": 0, 00:36:09.264 "data_size": 63488 00:36:09.264 } 00:36:09.264 ] 00:36:09.264 }' 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.264 14:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.831 [2024-10-09 14:05:16.239804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.831 14:05:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.831 "name": "Existed_Raid", 00:36:09.831 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:09.831 "strip_size_kb": 64, 00:36:09.831 "state": "configuring", 00:36:09.831 "raid_level": "raid5f", 00:36:09.831 "superblock": true, 00:36:09.831 "num_base_bdevs": 3, 00:36:09.831 "num_base_bdevs_discovered": 2, 00:36:09.831 "num_base_bdevs_operational": 3, 00:36:09.831 "base_bdevs_list": [ 00:36:09.831 { 00:36:09.831 "name": "BaseBdev1", 00:36:09.831 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:09.831 "is_configured": true, 00:36:09.831 "data_offset": 2048, 00:36:09.831 "data_size": 63488 00:36:09.831 }, 00:36:09.831 { 00:36:09.831 "name": null, 00:36:09.831 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:09.831 "is_configured": false, 00:36:09.831 "data_offset": 0, 00:36:09.831 "data_size": 63488 00:36:09.831 }, 00:36:09.831 { 00:36:09.831 "name": "BaseBdev3", 00:36:09.831 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:09.831 "is_configured": true, 00:36:09.832 "data_offset": 2048, 00:36:09.832 "data_size": 63488 00:36:09.832 } 00:36:09.832 ] 00:36:09.832 }' 00:36:09.832 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.832 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.399 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.400 14:05:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.400 [2024-10-09 14:05:16.735886] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:10.400 "name": "Existed_Raid", 00:36:10.400 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:10.400 "strip_size_kb": 64, 00:36:10.400 "state": "configuring", 00:36:10.400 "raid_level": "raid5f", 00:36:10.400 "superblock": true, 00:36:10.400 "num_base_bdevs": 3, 00:36:10.400 "num_base_bdevs_discovered": 1, 00:36:10.400 "num_base_bdevs_operational": 3, 00:36:10.400 "base_bdevs_list": [ 00:36:10.400 { 00:36:10.400 "name": null, 00:36:10.400 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:10.400 "is_configured": false, 00:36:10.400 "data_offset": 0, 00:36:10.400 "data_size": 63488 00:36:10.400 }, 00:36:10.400 { 00:36:10.400 "name": null, 00:36:10.400 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:10.400 "is_configured": false, 00:36:10.400 "data_offset": 0, 00:36:10.400 "data_size": 63488 00:36:10.400 }, 00:36:10.400 { 00:36:10.400 "name": "BaseBdev3", 00:36:10.400 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:10.400 "is_configured": true, 00:36:10.400 "data_offset": 2048, 00:36:10.400 "data_size": 63488 00:36:10.400 } 00:36:10.400 ] 00:36:10.400 }' 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:10.400 14:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.658 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:36:10.658 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.658 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.659 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.917 [2024-10-09 14:05:17.234542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:10.917 14:05:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:10.917 "name": "Existed_Raid", 00:36:10.917 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:10.917 "strip_size_kb": 64, 00:36:10.917 "state": "configuring", 00:36:10.917 "raid_level": "raid5f", 00:36:10.917 "superblock": true, 00:36:10.917 "num_base_bdevs": 3, 00:36:10.917 "num_base_bdevs_discovered": 2, 00:36:10.917 "num_base_bdevs_operational": 3, 00:36:10.917 "base_bdevs_list": [ 00:36:10.917 { 00:36:10.917 "name": null, 00:36:10.917 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:10.917 "is_configured": false, 00:36:10.917 "data_offset": 0, 00:36:10.917 "data_size": 63488 00:36:10.917 }, 00:36:10.917 { 00:36:10.917 "name": "BaseBdev2", 00:36:10.917 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:10.917 "is_configured": true, 00:36:10.917 "data_offset": 2048, 00:36:10.917 "data_size": 63488 00:36:10.917 }, 00:36:10.917 { 
00:36:10.917 "name": "BaseBdev3", 00:36:10.917 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:10.917 "is_configured": true, 00:36:10.917 "data_offset": 2048, 00:36:10.917 "data_size": 63488 00:36:10.917 } 00:36:10.917 ] 00:36:10.917 }' 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:10.917 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.175 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6a1f268a-3860-4a5f-ad96-154ae7ae2698 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 [2024-10-09 14:05:17.769711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:36:11.434 [2024-10-09 14:05:17.769882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:36:11.434 [2024-10-09 14:05:17.769901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:11.434 NewBaseBdev 00:36:11.434 [2024-10-09 14:05:17.770213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:36:11.434 [2024-10-09 14:05:17.770832] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:36:11.434 [2024-10-09 14:05:17.770852] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:36:11.434 [2024-10-09 14:05:17.770992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- 
# rpc_cmd bdev_wait_for_examine 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 [ 00:36:11.434 { 00:36:11.434 "name": "NewBaseBdev", 00:36:11.434 "aliases": [ 00:36:11.434 "6a1f268a-3860-4a5f-ad96-154ae7ae2698" 00:36:11.434 ], 00:36:11.434 "product_name": "Malloc disk", 00:36:11.434 "block_size": 512, 00:36:11.434 "num_blocks": 65536, 00:36:11.434 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:11.434 "assigned_rate_limits": { 00:36:11.434 "rw_ios_per_sec": 0, 00:36:11.434 "rw_mbytes_per_sec": 0, 00:36:11.434 "r_mbytes_per_sec": 0, 00:36:11.434 "w_mbytes_per_sec": 0 00:36:11.434 }, 00:36:11.434 "claimed": true, 00:36:11.434 "claim_type": "exclusive_write", 00:36:11.434 "zoned": false, 00:36:11.434 "supported_io_types": { 00:36:11.434 "read": true, 00:36:11.434 "write": true, 00:36:11.434 "unmap": true, 00:36:11.434 "flush": true, 00:36:11.434 "reset": true, 00:36:11.434 "nvme_admin": false, 00:36:11.434 "nvme_io": false, 00:36:11.434 "nvme_io_md": false, 00:36:11.434 "write_zeroes": true, 00:36:11.434 "zcopy": true, 00:36:11.434 "get_zone_info": false, 00:36:11.434 "zone_management": false, 00:36:11.434 "zone_append": false, 00:36:11.434 "compare": false, 00:36:11.434 "compare_and_write": false, 00:36:11.434 "abort": true, 00:36:11.434 "seek_hole": false, 00:36:11.434 "seek_data": false, 00:36:11.434 
"copy": true, 00:36:11.434 "nvme_iov_md": false 00:36:11.434 }, 00:36:11.434 "memory_domains": [ 00:36:11.434 { 00:36:11.434 "dma_device_id": "system", 00:36:11.434 "dma_device_type": 1 00:36:11.434 }, 00:36:11.434 { 00:36:11.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:11.434 "dma_device_type": 2 00:36:11.434 } 00:36:11.434 ], 00:36:11.434 "driver_specific": {} 00:36:11.434 } 00:36:11.434 ] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.434 14:05:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:11.434 "name": "Existed_Raid", 00:36:11.434 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:11.434 "strip_size_kb": 64, 00:36:11.434 "state": "online", 00:36:11.434 "raid_level": "raid5f", 00:36:11.434 "superblock": true, 00:36:11.434 "num_base_bdevs": 3, 00:36:11.434 "num_base_bdevs_discovered": 3, 00:36:11.434 "num_base_bdevs_operational": 3, 00:36:11.434 "base_bdevs_list": [ 00:36:11.434 { 00:36:11.434 "name": "NewBaseBdev", 00:36:11.434 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:11.434 "is_configured": true, 00:36:11.434 "data_offset": 2048, 00:36:11.434 "data_size": 63488 00:36:11.434 }, 00:36:11.434 { 00:36:11.434 "name": "BaseBdev2", 00:36:11.434 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:11.434 "is_configured": true, 00:36:11.434 "data_offset": 2048, 00:36:11.434 "data_size": 63488 00:36:11.434 }, 00:36:11.434 { 00:36:11.434 "name": "BaseBdev3", 00:36:11.434 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:11.434 "is_configured": true, 00:36:11.434 "data_offset": 2048, 00:36:11.434 "data_size": 63488 00:36:11.434 } 00:36:11.434 ] 00:36:11.434 }' 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:11.434 14:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.692 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:36:11.692 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:11.692 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:11.692 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.693 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:11.693 [2024-10-09 14:05:18.238028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:11.951 "name": "Existed_Raid", 00:36:11.951 "aliases": [ 00:36:11.951 "cce173a2-a918-4001-826f-5fda7f0748e6" 00:36:11.951 ], 00:36:11.951 "product_name": "Raid Volume", 00:36:11.951 "block_size": 512, 00:36:11.951 "num_blocks": 126976, 00:36:11.951 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:11.951 "assigned_rate_limits": { 00:36:11.951 "rw_ios_per_sec": 0, 00:36:11.951 "rw_mbytes_per_sec": 0, 00:36:11.951 "r_mbytes_per_sec": 0, 00:36:11.951 "w_mbytes_per_sec": 0 00:36:11.951 }, 00:36:11.951 "claimed": false, 00:36:11.951 "zoned": false, 00:36:11.951 
"supported_io_types": { 00:36:11.951 "read": true, 00:36:11.951 "write": true, 00:36:11.951 "unmap": false, 00:36:11.951 "flush": false, 00:36:11.951 "reset": true, 00:36:11.951 "nvme_admin": false, 00:36:11.951 "nvme_io": false, 00:36:11.951 "nvme_io_md": false, 00:36:11.951 "write_zeroes": true, 00:36:11.951 "zcopy": false, 00:36:11.951 "get_zone_info": false, 00:36:11.951 "zone_management": false, 00:36:11.951 "zone_append": false, 00:36:11.951 "compare": false, 00:36:11.951 "compare_and_write": false, 00:36:11.951 "abort": false, 00:36:11.951 "seek_hole": false, 00:36:11.951 "seek_data": false, 00:36:11.951 "copy": false, 00:36:11.951 "nvme_iov_md": false 00:36:11.951 }, 00:36:11.951 "driver_specific": { 00:36:11.951 "raid": { 00:36:11.951 "uuid": "cce173a2-a918-4001-826f-5fda7f0748e6", 00:36:11.951 "strip_size_kb": 64, 00:36:11.951 "state": "online", 00:36:11.951 "raid_level": "raid5f", 00:36:11.951 "superblock": true, 00:36:11.951 "num_base_bdevs": 3, 00:36:11.951 "num_base_bdevs_discovered": 3, 00:36:11.951 "num_base_bdevs_operational": 3, 00:36:11.951 "base_bdevs_list": [ 00:36:11.951 { 00:36:11.951 "name": "NewBaseBdev", 00:36:11.951 "uuid": "6a1f268a-3860-4a5f-ad96-154ae7ae2698", 00:36:11.951 "is_configured": true, 00:36:11.951 "data_offset": 2048, 00:36:11.951 "data_size": 63488 00:36:11.951 }, 00:36:11.951 { 00:36:11.951 "name": "BaseBdev2", 00:36:11.951 "uuid": "bac9cd65-ebfc-471b-8f88-80b10c7c7481", 00:36:11.951 "is_configured": true, 00:36:11.951 "data_offset": 2048, 00:36:11.951 "data_size": 63488 00:36:11.951 }, 00:36:11.951 { 00:36:11.951 "name": "BaseBdev3", 00:36:11.951 "uuid": "f6c01313-1f7f-46cf-b203-cce27e48c33d", 00:36:11.951 "is_configured": true, 00:36:11.951 "data_offset": 2048, 00:36:11.951 "data_size": 63488 00:36:11.951 } 00:36:11.951 ] 00:36:11.951 } 00:36:11.951 } 00:36:11.951 }' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:36:11.951 BaseBdev2 00:36:11.951 BaseBdev3' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:11.951 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.209 [2024-10-09 14:05:18.513912] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:12.209 [2024-10-09 14:05:18.513940] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:36:12.209 [2024-10-09 14:05:18.514006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:12.209 [2024-10-09 14:05:18.514245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:12.209 [2024-10-09 14:05:18.514261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91478 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91478 ']' 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91478 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91478 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:12.209 killing process with pid 91478 00:36:12.209 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91478' 00:36:12.210 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91478 00:36:12.210 [2024-10-09 14:05:18.553769] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:12.210 14:05:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 91478 00:36:12.210 [2024-10-09 14:05:18.584996] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:12.468 14:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:36:12.468 00:36:12.468 real 0m9.146s 00:36:12.468 user 0m15.746s 00:36:12.468 sys 0m1.964s 00:36:12.468 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.468 14:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.468 ************************************ 00:36:12.468 END TEST raid5f_state_function_test_sb 00:36:12.468 ************************************ 00:36:12.468 14:05:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:36:12.468 14:05:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:12.468 14:05:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:12.468 14:05:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:12.468 ************************************ 00:36:12.468 START TEST raid5f_superblock_test 00:36:12.468 ************************************ 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=92082 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 92082 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 92082 ']' 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:12.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:12.468 14:05:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:12.468 [2024-10-09 14:05:19.015242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:12.468 [2024-10-09 14:05:19.015469] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92082 ] 00:36:12.726 [2024-10-09 14:05:19.193065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.726 [2024-10-09 14:05:19.236919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.985 [2024-10-09 14:05:19.279827] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:12.985 [2024-10-09 14:05:19.279867] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.552 malloc1 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:13.552 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 [2024-10-09 14:05:19.995853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:13.553 [2024-10-09 14:05:19.995921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:13.553 [2024-10-09 14:05:19.995955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:13.553 [2024-10-09 14:05:19.995976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:13.553 [2024-10-09 14:05:19.998464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:13.553 [2024-10-09 14:05:19.998503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:13.553 pt1 00:36:13.553 
14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:36:13.553 14:05:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 malloc2 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 [2024-10-09 14:05:20.030960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:13.553 [2024-10-09 
14:05:20.031025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:13.553 [2024-10-09 14:05:20.031051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:13.553 [2024-10-09 14:05:20.031070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:13.553 [2024-10-09 14:05:20.034329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:13.553 [2024-10-09 14:05:20.034369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:13.553 pt2 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 malloc3 00:36:13.553 14:05:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 [2024-10-09 14:05:20.052151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:13.553 [2024-10-09 14:05:20.052200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:13.553 [2024-10-09 14:05:20.052221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:13.553 [2024-10-09 14:05:20.052236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:13.553 [2024-10-09 14:05:20.054725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:13.553 [2024-10-09 14:05:20.054761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:13.553 pt3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 [2024-10-09 14:05:20.060213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:36:13.553 [2024-10-09 14:05:20.062446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:13.553 [2024-10-09 14:05:20.062515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:13.553 [2024-10-09 14:05:20.062691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:36:13.553 [2024-10-09 14:05:20.062709] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:13.553 [2024-10-09 14:05:20.062977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:36:13.553 [2024-10-09 14:05:20.063405] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:36:13.553 [2024-10-09 14:05:20.063430] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:36:13.553 [2024-10-09 14:05:20.063544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:13.553 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:13.811 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:13.811 "name": "raid_bdev1", 00:36:13.811 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:13.811 "strip_size_kb": 64, 00:36:13.811 "state": "online", 00:36:13.811 "raid_level": "raid5f", 00:36:13.811 "superblock": true, 00:36:13.811 "num_base_bdevs": 3, 00:36:13.811 "num_base_bdevs_discovered": 3, 00:36:13.811 "num_base_bdevs_operational": 3, 00:36:13.811 "base_bdevs_list": [ 00:36:13.811 { 00:36:13.811 "name": "pt1", 00:36:13.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:13.811 "is_configured": true, 00:36:13.811 "data_offset": 2048, 00:36:13.811 "data_size": 63488 00:36:13.811 }, 00:36:13.811 { 00:36:13.811 "name": "pt2", 00:36:13.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:13.811 "is_configured": true, 00:36:13.811 "data_offset": 2048, 00:36:13.811 "data_size": 63488 00:36:13.811 }, 00:36:13.811 { 00:36:13.811 "name": "pt3", 00:36:13.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:13.811 "is_configured": true, 00:36:13.811 "data_offset": 2048, 00:36:13.811 "data_size": 63488 00:36:13.811 } 00:36:13.811 ] 
00:36:13.811 }' 00:36:13.811 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:13.811 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.070 [2024-10-09 14:05:20.513305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:14.070 "name": "raid_bdev1", 00:36:14.070 "aliases": [ 00:36:14.070 "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9" 00:36:14.070 ], 00:36:14.070 "product_name": "Raid Volume", 00:36:14.070 "block_size": 512, 00:36:14.070 "num_blocks": 126976, 00:36:14.070 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:14.070 "assigned_rate_limits": { 00:36:14.070 
"rw_ios_per_sec": 0, 00:36:14.070 "rw_mbytes_per_sec": 0, 00:36:14.070 "r_mbytes_per_sec": 0, 00:36:14.070 "w_mbytes_per_sec": 0 00:36:14.070 }, 00:36:14.070 "claimed": false, 00:36:14.070 "zoned": false, 00:36:14.070 "supported_io_types": { 00:36:14.070 "read": true, 00:36:14.070 "write": true, 00:36:14.070 "unmap": false, 00:36:14.070 "flush": false, 00:36:14.070 "reset": true, 00:36:14.070 "nvme_admin": false, 00:36:14.070 "nvme_io": false, 00:36:14.070 "nvme_io_md": false, 00:36:14.070 "write_zeroes": true, 00:36:14.070 "zcopy": false, 00:36:14.070 "get_zone_info": false, 00:36:14.070 "zone_management": false, 00:36:14.070 "zone_append": false, 00:36:14.070 "compare": false, 00:36:14.070 "compare_and_write": false, 00:36:14.070 "abort": false, 00:36:14.070 "seek_hole": false, 00:36:14.070 "seek_data": false, 00:36:14.070 "copy": false, 00:36:14.070 "nvme_iov_md": false 00:36:14.070 }, 00:36:14.070 "driver_specific": { 00:36:14.070 "raid": { 00:36:14.070 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:14.070 "strip_size_kb": 64, 00:36:14.070 "state": "online", 00:36:14.070 "raid_level": "raid5f", 00:36:14.070 "superblock": true, 00:36:14.070 "num_base_bdevs": 3, 00:36:14.070 "num_base_bdevs_discovered": 3, 00:36:14.070 "num_base_bdevs_operational": 3, 00:36:14.070 "base_bdevs_list": [ 00:36:14.070 { 00:36:14.070 "name": "pt1", 00:36:14.070 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:14.070 "is_configured": true, 00:36:14.070 "data_offset": 2048, 00:36:14.070 "data_size": 63488 00:36:14.070 }, 00:36:14.070 { 00:36:14.070 "name": "pt2", 00:36:14.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:14.070 "is_configured": true, 00:36:14.070 "data_offset": 2048, 00:36:14.070 "data_size": 63488 00:36:14.070 }, 00:36:14.070 { 00:36:14.070 "name": "pt3", 00:36:14.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:14.070 "is_configured": true, 00:36:14.070 "data_offset": 2048, 00:36:14.070 "data_size": 63488 00:36:14.070 } 00:36:14.070 ] 
00:36:14.070 } 00:36:14.070 } 00:36:14.070 }' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:14.070 pt2 00:36:14.070 pt3' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:14.070 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:14.328 [2024-10-09 14:05:20.757329] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 ']' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 [2024-10-09 14:05:20.797158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:14.328 [2024-10-09 14:05:20.797185] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:14.328 [2024-10-09 14:05:20.797268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:14.328 [2024-10-09 14:05:20.797336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:14.328 [2024-10-09 14:05:20.797357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.328 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.613 [2024-10-09 14:05:20.925214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:14.613 [2024-10-09 
14:05:20.927443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:14.613 [2024-10-09 14:05:20.927492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:36:14.613 [2024-10-09 14:05:20.927545] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:14.613 [2024-10-09 14:05:20.927607] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:14.613 [2024-10-09 14:05:20.927631] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:36:14.613 [2024-10-09 14:05:20.927647] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:14.613 [2024-10-09 14:05:20.927662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:36:14.613 request: 00:36:14.613 { 00:36:14.613 "name": "raid_bdev1", 00:36:14.613 "raid_level": "raid5f", 00:36:14.613 "base_bdevs": [ 00:36:14.613 "malloc1", 00:36:14.613 "malloc2", 00:36:14.613 "malloc3" 00:36:14.613 ], 00:36:14.613 "strip_size_kb": 64, 00:36:14.613 "superblock": false, 00:36:14.613 "method": "bdev_raid_create", 00:36:14.613 "req_id": 1 00:36:14.613 } 00:36:14.613 Got JSON-RPC error response 00:36:14.613 response: 00:36:14.613 { 00:36:14.613 "code": -17, 00:36:14.613 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:14.613 } 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.613 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.613 [2024-10-09 14:05:20.985197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:14.613 [2024-10-09 14:05:20.985244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:14.614 [2024-10-09 14:05:20.985263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:14.614 [2024-10-09 14:05:20.985277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:14.614 [2024-10-09 14:05:20.987777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:14.614 [2024-10-09 14:05:20.987813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:14.614 [2024-10-09 14:05:20.987879] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:14.614 [2024-10-09 14:05:20.987916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:14.614 pt1 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:14.614 14:05:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:14.614 14:05:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:14.614 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:14.614 "name": "raid_bdev1", 00:36:14.614 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:14.614 "strip_size_kb": 64, 00:36:14.614 "state": "configuring", 00:36:14.614 "raid_level": "raid5f", 00:36:14.614 "superblock": true, 00:36:14.614 "num_base_bdevs": 3, 00:36:14.614 "num_base_bdevs_discovered": 1, 00:36:14.614 "num_base_bdevs_operational": 3, 00:36:14.614 "base_bdevs_list": [ 00:36:14.614 { 00:36:14.614 "name": "pt1", 00:36:14.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:14.614 "is_configured": true, 00:36:14.614 "data_offset": 2048, 00:36:14.614 "data_size": 63488 00:36:14.614 }, 00:36:14.614 { 00:36:14.614 "name": null, 00:36:14.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:14.614 "is_configured": false, 00:36:14.614 "data_offset": 2048, 00:36:14.614 "data_size": 63488 00:36:14.614 }, 00:36:14.614 { 00:36:14.614 "name": null, 00:36:14.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:14.614 "is_configured": false, 00:36:14.614 "data_offset": 2048, 00:36:14.614 "data_size": 63488 00:36:14.614 } 00:36:14.614 ] 00:36:14.614 }' 00:36:14.614 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:14.614 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.193 [2024-10-09 14:05:21.449316] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:15.193 [2024-10-09 14:05:21.449373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.193 [2024-10-09 14:05:21.449393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:15.193 [2024-10-09 14:05:21.449411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.193 [2024-10-09 14:05:21.449819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.193 [2024-10-09 14:05:21.449843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:15.193 [2024-10-09 14:05:21.449910] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:15.193 [2024-10-09 14:05:21.449936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:15.193 pt2 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.193 [2024-10-09 14:05:21.457316] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.193 "name": "raid_bdev1", 00:36:15.193 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:15.193 "strip_size_kb": 64, 00:36:15.193 "state": "configuring", 00:36:15.193 "raid_level": "raid5f", 00:36:15.193 "superblock": true, 00:36:15.193 "num_base_bdevs": 3, 00:36:15.193 "num_base_bdevs_discovered": 1, 00:36:15.193 "num_base_bdevs_operational": 3, 00:36:15.193 "base_bdevs_list": [ 00:36:15.193 { 00:36:15.193 "name": "pt1", 00:36:15.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:15.193 "is_configured": true, 00:36:15.193 "data_offset": 2048, 00:36:15.193 "data_size": 63488 00:36:15.193 }, 00:36:15.193 { 
00:36:15.193 "name": null, 00:36:15.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:15.193 "is_configured": false, 00:36:15.193 "data_offset": 0, 00:36:15.193 "data_size": 63488 00:36:15.193 }, 00:36:15.193 { 00:36:15.193 "name": null, 00:36:15.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:15.193 "is_configured": false, 00:36:15.193 "data_offset": 2048, 00:36:15.193 "data_size": 63488 00:36:15.193 } 00:36:15.193 ] 00:36:15.193 }' 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.193 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.452 [2024-10-09 14:05:21.905402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:15.452 [2024-10-09 14:05:21.905460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.452 [2024-10-09 14:05:21.905483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:36:15.452 [2024-10-09 14:05:21.905495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.452 [2024-10-09 14:05:21.905909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.452 [2024-10-09 14:05:21.905929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:15.452 [2024-10-09 
14:05:21.905998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:15.452 [2024-10-09 14:05:21.906021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:15.452 pt2 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.452 [2024-10-09 14:05:21.917387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:15.452 [2024-10-09 14:05:21.917429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.452 [2024-10-09 14:05:21.917450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:15.452 [2024-10-09 14:05:21.917460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.452 [2024-10-09 14:05:21.917828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.452 [2024-10-09 14:05:21.917851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:15.452 [2024-10-09 14:05:21.917911] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:15.452 [2024-10-09 14:05:21.917931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:15.452 [2024-10-09 14:05:21.918030] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:36:15.452 [2024-10-09 14:05:21.918041] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:15.452 [2024-10-09 14:05:21.918274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:15.452 [2024-10-09 14:05:21.918694] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:36:15.452 [2024-10-09 14:05:21.918709] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:36:15.452 [2024-10-09 14:05:21.918806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:15.452 pt3 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:15.452 "name": "raid_bdev1", 00:36:15.452 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:15.452 "strip_size_kb": 64, 00:36:15.452 "state": "online", 00:36:15.452 "raid_level": "raid5f", 00:36:15.452 "superblock": true, 00:36:15.452 "num_base_bdevs": 3, 00:36:15.452 "num_base_bdevs_discovered": 3, 00:36:15.452 "num_base_bdevs_operational": 3, 00:36:15.452 "base_bdevs_list": [ 00:36:15.452 { 00:36:15.452 "name": "pt1", 00:36:15.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:15.452 "is_configured": true, 00:36:15.452 "data_offset": 2048, 00:36:15.452 "data_size": 63488 00:36:15.452 }, 00:36:15.452 { 00:36:15.452 "name": "pt2", 00:36:15.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:15.452 "is_configured": true, 00:36:15.452 "data_offset": 2048, 00:36:15.452 "data_size": 63488 00:36:15.452 }, 00:36:15.452 { 00:36:15.452 "name": "pt3", 00:36:15.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:15.452 "is_configured": true, 00:36:15.452 "data_offset": 2048, 00:36:15.452 "data_size": 63488 00:36:15.452 } 00:36:15.452 ] 00:36:15.452 }' 00:36:15.452 14:05:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:15.452 14:05:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.019 [2024-10-09 14:05:22.389708] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:16.019 "name": "raid_bdev1", 00:36:16.019 "aliases": [ 00:36:16.019 "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9" 00:36:16.019 ], 00:36:16.019 "product_name": "Raid Volume", 00:36:16.019 "block_size": 512, 00:36:16.019 "num_blocks": 126976, 00:36:16.019 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:16.019 "assigned_rate_limits": { 00:36:16.019 "rw_ios_per_sec": 0, 00:36:16.019 "rw_mbytes_per_sec": 0, 00:36:16.019 "r_mbytes_per_sec": 0, 00:36:16.019 "w_mbytes_per_sec": 0 00:36:16.019 }, 
00:36:16.019 "claimed": false, 00:36:16.019 "zoned": false, 00:36:16.019 "supported_io_types": { 00:36:16.019 "read": true, 00:36:16.019 "write": true, 00:36:16.019 "unmap": false, 00:36:16.019 "flush": false, 00:36:16.019 "reset": true, 00:36:16.019 "nvme_admin": false, 00:36:16.019 "nvme_io": false, 00:36:16.019 "nvme_io_md": false, 00:36:16.019 "write_zeroes": true, 00:36:16.019 "zcopy": false, 00:36:16.019 "get_zone_info": false, 00:36:16.019 "zone_management": false, 00:36:16.019 "zone_append": false, 00:36:16.019 "compare": false, 00:36:16.019 "compare_and_write": false, 00:36:16.019 "abort": false, 00:36:16.019 "seek_hole": false, 00:36:16.019 "seek_data": false, 00:36:16.019 "copy": false, 00:36:16.019 "nvme_iov_md": false 00:36:16.019 }, 00:36:16.019 "driver_specific": { 00:36:16.019 "raid": { 00:36:16.019 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:16.019 "strip_size_kb": 64, 00:36:16.019 "state": "online", 00:36:16.019 "raid_level": "raid5f", 00:36:16.019 "superblock": true, 00:36:16.019 "num_base_bdevs": 3, 00:36:16.019 "num_base_bdevs_discovered": 3, 00:36:16.019 "num_base_bdevs_operational": 3, 00:36:16.019 "base_bdevs_list": [ 00:36:16.019 { 00:36:16.019 "name": "pt1", 00:36:16.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:16.019 "is_configured": true, 00:36:16.019 "data_offset": 2048, 00:36:16.019 "data_size": 63488 00:36:16.019 }, 00:36:16.019 { 00:36:16.019 "name": "pt2", 00:36:16.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:16.019 "is_configured": true, 00:36:16.019 "data_offset": 2048, 00:36:16.019 "data_size": 63488 00:36:16.019 }, 00:36:16.019 { 00:36:16.019 "name": "pt3", 00:36:16.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:16.019 "is_configured": true, 00:36:16.019 "data_offset": 2048, 00:36:16.019 "data_size": 63488 00:36:16.019 } 00:36:16.019 ] 00:36:16.019 } 00:36:16.019 } 00:36:16.019 }' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:16.019 pt2 00:36:16.019 pt3' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.019 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.277 [2024-10-09 14:05:22.661754] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 '!=' 6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 ']' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.277 [2024-10-09 14:05:22.697595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.277 "name": "raid_bdev1", 00:36:16.277 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:16.277 "strip_size_kb": 64, 00:36:16.277 "state": "online", 00:36:16.277 "raid_level": "raid5f", 00:36:16.277 "superblock": true, 00:36:16.277 "num_base_bdevs": 3, 00:36:16.277 "num_base_bdevs_discovered": 2, 00:36:16.277 "num_base_bdevs_operational": 2, 00:36:16.277 "base_bdevs_list": [ 00:36:16.277 { 00:36:16.277 "name": null, 00:36:16.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.277 "is_configured": false, 00:36:16.277 "data_offset": 0, 00:36:16.277 "data_size": 63488 00:36:16.277 }, 00:36:16.277 { 00:36:16.277 "name": "pt2", 00:36:16.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:16.277 "is_configured": true, 00:36:16.277 "data_offset": 2048, 00:36:16.277 "data_size": 63488 00:36:16.277 }, 00:36:16.277 { 00:36:16.277 "name": "pt3", 00:36:16.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:16.277 "is_configured": true, 00:36:16.277 "data_offset": 2048, 00:36:16.277 "data_size": 63488 00:36:16.277 } 00:36:16.277 ] 00:36:16.277 }' 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.277 14:05:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 
14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 [2024-10-09 14:05:23.121675] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:16.844 [2024-10-09 14:05:23.121715] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:16.844 [2024-10-09 14:05:23.121802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:16.844 [2024-10-09 14:05:23.121866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:16.844 [2024-10-09 14:05:23.121878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 [2024-10-09 14:05:23.185713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:36:16.844 [2024-10-09 14:05:23.185761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:16.844 [2024-10-09 14:05:23.185782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:36:16.844 [2024-10-09 14:05:23.185793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:16.844 [2024-10-09 14:05:23.188271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:16.844 [2024-10-09 14:05:23.188305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:16.844 [2024-10-09 14:05:23.188374] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:16.844 [2024-10-09 14:05:23.188411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:16.844 pt2 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.844 "name": "raid_bdev1", 00:36:16.844 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:16.844 "strip_size_kb": 64, 00:36:16.844 "state": "configuring", 00:36:16.844 "raid_level": "raid5f", 00:36:16.844 "superblock": true, 00:36:16.844 "num_base_bdevs": 3, 00:36:16.844 "num_base_bdevs_discovered": 1, 00:36:16.844 "num_base_bdevs_operational": 2, 00:36:16.844 "base_bdevs_list": [ 00:36:16.844 { 00:36:16.844 "name": null, 00:36:16.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.844 "is_configured": false, 00:36:16.844 "data_offset": 2048, 00:36:16.844 "data_size": 63488 00:36:16.844 }, 00:36:16.844 { 00:36:16.844 "name": "pt2", 00:36:16.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:16.844 "is_configured": true, 00:36:16.844 "data_offset": 2048, 00:36:16.844 "data_size": 63488 00:36:16.844 }, 00:36:16.844 { 00:36:16.844 "name": null, 00:36:16.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:16.844 "is_configured": false, 00:36:16.844 "data_offset": 2048, 00:36:16.844 "data_size": 63488 00:36:16.844 } 00:36:16.844 ] 00:36:16.844 }' 00:36:16.844 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.844 14:05:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.103 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.103 [2024-10-09 14:05:23.597811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:17.103 [2024-10-09 14:05:23.597865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:17.103 [2024-10-09 14:05:23.597890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:17.103 [2024-10-09 14:05:23.597901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:17.103 [2024-10-09 14:05:23.598300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:17.103 [2024-10-09 14:05:23.598318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:17.103 [2024-10-09 14:05:23.598391] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:17.103 [2024-10-09 14:05:23.598419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:17.103 [2024-10-09 14:05:23.598507] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:36:17.103 [2024-10-09 14:05:23.598517] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:17.103 [2024-10-09 
14:05:23.598778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:17.103 [2024-10-09 14:05:23.599251] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:36:17.103 [2024-10-09 14:05:23.599277] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:36:17.104 [2024-10-09 14:05:23.599500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:17.104 pt3 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.104 "name": "raid_bdev1", 00:36:17.104 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:17.104 "strip_size_kb": 64, 00:36:17.104 "state": "online", 00:36:17.104 "raid_level": "raid5f", 00:36:17.104 "superblock": true, 00:36:17.104 "num_base_bdevs": 3, 00:36:17.104 "num_base_bdevs_discovered": 2, 00:36:17.104 "num_base_bdevs_operational": 2, 00:36:17.104 "base_bdevs_list": [ 00:36:17.104 { 00:36:17.104 "name": null, 00:36:17.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.104 "is_configured": false, 00:36:17.104 "data_offset": 2048, 00:36:17.104 "data_size": 63488 00:36:17.104 }, 00:36:17.104 { 00:36:17.104 "name": "pt2", 00:36:17.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:17.104 "is_configured": true, 00:36:17.104 "data_offset": 2048, 00:36:17.104 "data_size": 63488 00:36:17.104 }, 00:36:17.104 { 00:36:17.104 "name": "pt3", 00:36:17.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:17.104 "is_configured": true, 00:36:17.104 "data_offset": 2048, 00:36:17.104 "data_size": 63488 00:36:17.104 } 00:36:17.104 ] 00:36:17.104 }' 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.104 14:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.672 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:17.672 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.672 14:05:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:17.672 [2024-10-09 14:05:24.041899] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:17.672 [2024-10-09 14:05:24.042061] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:17.672 [2024-10-09 14:05:24.042174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:17.672 [2024-10-09 14:05:24.042236] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:17.672 [2024-10-09 14:05:24.042263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.673 14:05:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.673 [2024-10-09 14:05:24.105897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:17.673 [2024-10-09 14:05:24.105959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:17.673 [2024-10-09 14:05:24.105979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:17.673 [2024-10-09 14:05:24.105994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:17.673 [2024-10-09 14:05:24.108649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:17.673 [2024-10-09 14:05:24.108797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:17.673 [2024-10-09 14:05:24.108904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:17.673 [2024-10-09 14:05:24.108951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:17.673 [2024-10-09 14:05:24.109074] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:17.673 [2024-10-09 14:05:24.109102] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:17.673 [2024-10-09 14:05:24.109122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:36:17.673 
[2024-10-09 14:05:24.109168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:17.673 pt1 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.673 "name": "raid_bdev1", 00:36:17.673 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:17.673 "strip_size_kb": 64, 00:36:17.673 "state": "configuring", 00:36:17.673 "raid_level": "raid5f", 00:36:17.673 "superblock": true, 00:36:17.673 "num_base_bdevs": 3, 00:36:17.673 "num_base_bdevs_discovered": 1, 00:36:17.673 "num_base_bdevs_operational": 2, 00:36:17.673 "base_bdevs_list": [ 00:36:17.673 { 00:36:17.673 "name": null, 00:36:17.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:17.673 "is_configured": false, 00:36:17.673 "data_offset": 2048, 00:36:17.673 "data_size": 63488 00:36:17.673 }, 00:36:17.673 { 00:36:17.673 "name": "pt2", 00:36:17.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:17.673 "is_configured": true, 00:36:17.673 "data_offset": 2048, 00:36:17.673 "data_size": 63488 00:36:17.673 }, 00:36:17.673 { 00:36:17.673 "name": null, 00:36:17.673 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:17.673 "is_configured": false, 00:36:17.673 "data_offset": 2048, 00:36:17.673 "data_size": 63488 00:36:17.673 } 00:36:17.673 ] 00:36:17.673 }' 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.673 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.241 [2024-10-09 14:05:24.638038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:36:18.241 [2024-10-09 14:05:24.638225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:18.241 [2024-10-09 14:05:24.638256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:36:18.241 [2024-10-09 14:05:24.638272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:18.241 [2024-10-09 14:05:24.638725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:18.241 [2024-10-09 14:05:24.638754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:36:18.241 [2024-10-09 14:05:24.638831] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:36:18.241 [2024-10-09 14:05:24.638859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:36:18.241 [2024-10-09 14:05:24.638948] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:36:18.241 [2024-10-09 14:05:24.638962] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:18.241 [2024-10-09 14:05:24.639213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:18.241 [2024-10-09 14:05:24.639729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:36:18.241 [2024-10-09 
14:05:24.639748] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:36:18.241 [2024-10-09 14:05:24.639917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:18.241 pt3 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:18.241 14:05:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.241 "name": "raid_bdev1", 00:36:18.241 "uuid": "6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9", 00:36:18.241 "strip_size_kb": 64, 00:36:18.241 "state": "online", 00:36:18.241 "raid_level": "raid5f", 00:36:18.241 "superblock": true, 00:36:18.241 "num_base_bdevs": 3, 00:36:18.241 "num_base_bdevs_discovered": 2, 00:36:18.241 "num_base_bdevs_operational": 2, 00:36:18.241 "base_bdevs_list": [ 00:36:18.241 { 00:36:18.241 "name": null, 00:36:18.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.241 "is_configured": false, 00:36:18.241 "data_offset": 2048, 00:36:18.241 "data_size": 63488 00:36:18.241 }, 00:36:18.241 { 00:36:18.241 "name": "pt2", 00:36:18.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:18.241 "is_configured": true, 00:36:18.241 "data_offset": 2048, 00:36:18.241 "data_size": 63488 00:36:18.241 }, 00:36:18.241 { 00:36:18.241 "name": "pt3", 00:36:18.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:36:18.241 "is_configured": true, 00:36:18.241 "data_offset": 2048, 00:36:18.241 "data_size": 63488 00:36:18.241 } 00:36:18.241 ] 00:36:18.241 }' 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.241 14:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.809 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:18.809 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:36:18.810 [2024-10-09 14:05:25.122349] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 '!=' 6a77e9bc-cfb7-497a-9944-eb2e3e3c22b9 ']' 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 92082 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 92082 ']' 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 92082 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92082 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:18.810 killing process with pid 92082 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 92082' 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 92082 00:36:18.810 [2024-10-09 14:05:25.203496] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:18.810 [2024-10-09 14:05:25.203580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:18.810 [2024-10-09 14:05:25.203642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:18.810 [2024-10-09 14:05:25.203654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:36:18.810 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 92082 00:36:18.810 [2024-10-09 14:05:25.238769] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:19.069 14:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:36:19.069 00:36:19.069 real 0m6.568s 00:36:19.069 user 0m11.086s 00:36:19.069 sys 0m1.434s 00:36:19.069 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:19.069 ************************************ 00:36:19.069 END TEST raid5f_superblock_test 00:36:19.069 ************************************ 00:36:19.069 14:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 14:05:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:36:19.069 14:05:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:36:19.069 14:05:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:36:19.069 14:05:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:19.069 14:05:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:19.069 ************************************ 00:36:19.069 START TEST 
raid5f_rebuild_test 00:36:19.069 ************************************ 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:19.069 14:05:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92509 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:19.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92509 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92509 ']' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:19.069 14:05:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:19.329 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:19.329 Zero copy mechanism will not be used. 00:36:19.329 [2024-10-09 14:05:25.662632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:19.329 [2024-10-09 14:05:25.662810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92509 ] 00:36:19.329 [2024-10-09 14:05:25.841798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.586 [2024-10-09 14:05:25.885212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.586 [2024-10-09 14:05:25.928318] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:19.586 [2024-10-09 14:05:25.928353] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.154 BaseBdev1_malloc 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.154 [2024-10-09 14:05:26.608070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:36:20.154 [2024-10-09 14:05:26.608147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.154 [2024-10-09 14:05:26.608173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:20.154 [2024-10-09 14:05:26.608191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.154 [2024-10-09 14:05:26.610734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.154 [2024-10-09 14:05:26.610918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:20.154 BaseBdev1 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.154 BaseBdev2_malloc 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.154 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.155 [2024-10-09 14:05:26.644968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:20.155 [2024-10-09 14:05:26.645140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.155 [2024-10-09 14:05:26.645172] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:20.155 [2024-10-09 14:05:26.645185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.155 [2024-10-09 14:05:26.647810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.155 [2024-10-09 14:05:26.647847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:20.155 BaseBdev2 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.155 BaseBdev3_malloc 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.155 [2024-10-09 14:05:26.673900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:20.155 [2024-10-09 14:05:26.673948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.155 [2024-10-09 14:05:26.673974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:20.155 [2024-10-09 14:05:26.673986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.155 
[2024-10-09 14:05:26.676384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.155 [2024-10-09 14:05:26.676422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:20.155 BaseBdev3 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.155 spare_malloc 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.155 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.414 spare_delay 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.414 [2024-10-09 14:05:26.714919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:20.414 [2024-10-09 14:05:26.714978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.414 [2024-10-09 14:05:26.715023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009c80 00:36:20.414 [2024-10-09 14:05:26.715034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.414 [2024-10-09 14:05:26.717459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.414 [2024-10-09 14:05:26.717614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:20.414 spare 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.414 [2024-10-09 14:05:26.726987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:20.414 [2024-10-09 14:05:26.729106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:20.414 [2024-10-09 14:05:26.729311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:20.414 [2024-10-09 14:05:26.729396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:36:20.414 [2024-10-09 14:05:26.729409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:36:20.414 [2024-10-09 14:05:26.729717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:20.414 [2024-10-09 14:05:26.730123] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:36:20.414 [2024-10-09 14:05:26.730140] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:36:20.414 [2024-10-09 14:05:26.730262] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.414 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:20.414 "name": "raid_bdev1", 00:36:20.414 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 
00:36:20.414 "strip_size_kb": 64, 00:36:20.414 "state": "online", 00:36:20.414 "raid_level": "raid5f", 00:36:20.414 "superblock": false, 00:36:20.414 "num_base_bdevs": 3, 00:36:20.414 "num_base_bdevs_discovered": 3, 00:36:20.414 "num_base_bdevs_operational": 3, 00:36:20.414 "base_bdevs_list": [ 00:36:20.414 { 00:36:20.414 "name": "BaseBdev1", 00:36:20.414 "uuid": "34cfaab1-dff7-5ab5-9b88-6f4a76b02642", 00:36:20.414 "is_configured": true, 00:36:20.414 "data_offset": 0, 00:36:20.414 "data_size": 65536 00:36:20.414 }, 00:36:20.414 { 00:36:20.414 "name": "BaseBdev2", 00:36:20.414 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:20.414 "is_configured": true, 00:36:20.414 "data_offset": 0, 00:36:20.414 "data_size": 65536 00:36:20.414 }, 00:36:20.414 { 00:36:20.415 "name": "BaseBdev3", 00:36:20.415 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:20.415 "is_configured": true, 00:36:20.415 "data_offset": 0, 00:36:20.415 "data_size": 65536 00:36:20.415 } 00:36:20.415 ] 00:36:20.415 }' 00:36:20.415 14:05:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.415 14:05:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:20.673 [2024-10-09 14:05:27.152085] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:20.673 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:20.932 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:36:21.191 [2024-10-09 14:05:27.520069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:21.191 /dev/nbd0 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:36:21.191 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:21.192 1+0 records in 00:36:21.192 1+0 records out 00:36:21.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028349 s, 14.4 MB/s 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 
-- # '[' 4096 '!=' 0 ']' 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:36:21.192 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:36:21.450 512+0 records in 00:36:21.450 512+0 records out 00:36:21.450 67108864 bytes (67 MB, 64 MiB) copied, 0.330457 s, 203 MB/s 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:21.450 14:05:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:21.709 [2024-10-09 14:05:28.207013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.709 [2024-10-09 14:05:28.227113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:21.709 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.968 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:21.968 "name": "raid_bdev1", 00:36:21.968 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:21.968 "strip_size_kb": 64, 00:36:21.968 "state": "online", 00:36:21.968 "raid_level": "raid5f", 00:36:21.968 "superblock": false, 00:36:21.968 "num_base_bdevs": 3, 00:36:21.968 "num_base_bdevs_discovered": 2, 00:36:21.968 "num_base_bdevs_operational": 2, 00:36:21.968 "base_bdevs_list": [ 00:36:21.968 { 00:36:21.968 "name": null, 00:36:21.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.968 "is_configured": false, 00:36:21.968 "data_offset": 0, 00:36:21.968 "data_size": 65536 00:36:21.968 }, 00:36:21.968 { 00:36:21.968 "name": "BaseBdev2", 00:36:21.968 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:21.968 "is_configured": true, 00:36:21.968 "data_offset": 0, 00:36:21.968 "data_size": 65536 00:36:21.968 }, 00:36:21.968 { 00:36:21.968 "name": "BaseBdev3", 00:36:21.968 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:21.968 "is_configured": true, 00:36:21.968 "data_offset": 0, 00:36:21.968 "data_size": 65536 00:36:21.968 } 00:36:21.968 ] 00:36:21.968 }' 
00:36:21.968 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:21.968 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.227 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:22.227 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:22.227 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:22.227 [2024-10-09 14:05:28.695246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:22.227 [2024-10-09 14:05:28.699044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:36:22.227 [2024-10-09 14:05:28.701562] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:22.227 14:05:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:22.227 14:05:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:36:23.162 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:23.162 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:23.162 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:23.162 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:23.162 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.421 14:05:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:23.421 "name": "raid_bdev1", 00:36:23.421 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:23.421 "strip_size_kb": 64, 00:36:23.421 "state": "online", 00:36:23.421 "raid_level": "raid5f", 00:36:23.421 "superblock": false, 00:36:23.421 "num_base_bdevs": 3, 00:36:23.421 "num_base_bdevs_discovered": 3, 00:36:23.421 "num_base_bdevs_operational": 3, 00:36:23.421 "process": { 00:36:23.421 "type": "rebuild", 00:36:23.421 "target": "spare", 00:36:23.421 "progress": { 00:36:23.421 "blocks": 20480, 00:36:23.421 "percent": 15 00:36:23.421 } 00:36:23.421 }, 00:36:23.421 "base_bdevs_list": [ 00:36:23.421 { 00:36:23.421 "name": "spare", 00:36:23.421 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:23.421 "is_configured": true, 00:36:23.421 "data_offset": 0, 00:36:23.421 "data_size": 65536 00:36:23.421 }, 00:36:23.421 { 00:36:23.421 "name": "BaseBdev2", 00:36:23.421 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:23.421 "is_configured": true, 00:36:23.421 "data_offset": 0, 00:36:23.421 "data_size": 65536 00:36:23.421 }, 00:36:23.421 { 00:36:23.421 "name": "BaseBdev3", 00:36:23.421 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:23.421 "is_configured": true, 00:36:23.421 "data_offset": 0, 00:36:23.421 "data_size": 65536 00:36:23.421 } 00:36:23.421 ] 00:36:23.421 }' 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:23.421 14:05:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.421 [2024-10-09 14:05:29.854900] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:23.421 [2024-10-09 14:05:29.911618] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:23.421 [2024-10-09 14:05:29.911698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:23.421 [2024-10-09 14:05:29.911717] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:23.421 [2024-10-09 14:05:29.911731] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.421 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.680 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.680 "name": "raid_bdev1", 00:36:23.680 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:23.680 "strip_size_kb": 64, 00:36:23.680 "state": "online", 00:36:23.680 "raid_level": "raid5f", 00:36:23.680 "superblock": false, 00:36:23.680 "num_base_bdevs": 3, 00:36:23.680 "num_base_bdevs_discovered": 2, 00:36:23.680 "num_base_bdevs_operational": 2, 00:36:23.680 "base_bdevs_list": [ 00:36:23.680 { 00:36:23.680 "name": null, 00:36:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.680 "is_configured": false, 00:36:23.680 "data_offset": 0, 00:36:23.680 "data_size": 65536 00:36:23.680 }, 00:36:23.680 { 00:36:23.680 "name": "BaseBdev2", 00:36:23.680 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:23.680 "is_configured": true, 00:36:23.680 "data_offset": 0, 00:36:23.680 "data_size": 65536 00:36:23.680 }, 00:36:23.680 { 00:36:23.680 "name": "BaseBdev3", 00:36:23.680 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:23.680 "is_configured": true, 00:36:23.680 "data_offset": 0, 00:36:23.680 "data_size": 65536 00:36:23.680 } 00:36:23.680 ] 00:36:23.680 }' 
00:36:23.680 14:05:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.680 14:05:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:23.939 "name": "raid_bdev1", 00:36:23.939 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:23.939 "strip_size_kb": 64, 00:36:23.939 "state": "online", 00:36:23.939 "raid_level": "raid5f", 00:36:23.939 "superblock": false, 00:36:23.939 "num_base_bdevs": 3, 00:36:23.939 "num_base_bdevs_discovered": 2, 00:36:23.939 "num_base_bdevs_operational": 2, 00:36:23.939 "base_bdevs_list": [ 00:36:23.939 { 00:36:23.939 "name": null, 00:36:23.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.939 "is_configured": false, 00:36:23.939 "data_offset": 0, 00:36:23.939 "data_size": 65536 00:36:23.939 }, 
00:36:23.939 { 00:36:23.939 "name": "BaseBdev2", 00:36:23.939 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:23.939 "is_configured": true, 00:36:23.939 "data_offset": 0, 00:36:23.939 "data_size": 65536 00:36:23.939 }, 00:36:23.939 { 00:36:23.939 "name": "BaseBdev3", 00:36:23.939 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:23.939 "is_configured": true, 00:36:23.939 "data_offset": 0, 00:36:23.939 "data_size": 65536 00:36:23.939 } 00:36:23.939 ] 00:36:23.939 }' 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:23.939 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:24.198 [2024-10-09 14:05:30.517871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:24.198 [2024-10-09 14:05:30.521804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:24.198 14:05:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:36:24.198 [2024-10-09 14:05:30.524308] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:25.133 14:05:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:25.133 "name": "raid_bdev1", 00:36:25.133 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:25.133 "strip_size_kb": 64, 00:36:25.133 "state": "online", 00:36:25.133 "raid_level": "raid5f", 00:36:25.133 "superblock": false, 00:36:25.133 "num_base_bdevs": 3, 00:36:25.133 "num_base_bdevs_discovered": 3, 00:36:25.133 "num_base_bdevs_operational": 3, 00:36:25.133 "process": { 00:36:25.133 "type": "rebuild", 00:36:25.133 "target": "spare", 00:36:25.133 "progress": { 00:36:25.133 "blocks": 20480, 00:36:25.133 "percent": 15 00:36:25.133 } 00:36:25.133 }, 00:36:25.133 "base_bdevs_list": [ 00:36:25.133 { 00:36:25.133 "name": "spare", 00:36:25.133 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:25.133 "is_configured": true, 00:36:25.133 "data_offset": 0, 00:36:25.133 "data_size": 65536 00:36:25.133 }, 00:36:25.133 { 00:36:25.133 "name": "BaseBdev2", 00:36:25.133 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:25.133 
"is_configured": true, 00:36:25.133 "data_offset": 0, 00:36:25.133 "data_size": 65536 00:36:25.133 }, 00:36:25.133 { 00:36:25.133 "name": "BaseBdev3", 00:36:25.133 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:25.133 "is_configured": true, 00:36:25.133 "data_offset": 0, 00:36:25.133 "data_size": 65536 00:36:25.133 } 00:36:25.133 ] 00:36:25.133 }' 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:25.133 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=465 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.134 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:25.392 14:05:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.392 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:25.392 "name": "raid_bdev1", 00:36:25.392 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:25.392 "strip_size_kb": 64, 00:36:25.392 "state": "online", 00:36:25.392 "raid_level": "raid5f", 00:36:25.392 "superblock": false, 00:36:25.392 "num_base_bdevs": 3, 00:36:25.392 "num_base_bdevs_discovered": 3, 00:36:25.392 "num_base_bdevs_operational": 3, 00:36:25.392 "process": { 00:36:25.392 "type": "rebuild", 00:36:25.392 "target": "spare", 00:36:25.392 "progress": { 00:36:25.392 "blocks": 22528, 00:36:25.392 "percent": 17 00:36:25.392 } 00:36:25.392 }, 00:36:25.392 "base_bdevs_list": [ 00:36:25.392 { 00:36:25.392 "name": "spare", 00:36:25.392 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:25.392 "is_configured": true, 00:36:25.392 "data_offset": 0, 00:36:25.392 "data_size": 65536 00:36:25.392 }, 00:36:25.392 { 00:36:25.392 "name": "BaseBdev2", 00:36:25.392 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:25.392 "is_configured": true, 00:36:25.392 "data_offset": 0, 00:36:25.392 "data_size": 65536 00:36:25.392 }, 00:36:25.392 { 00:36:25.392 "name": "BaseBdev3", 00:36:25.393 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:25.393 "is_configured": true, 00:36:25.393 "data_offset": 0, 00:36:25.393 "data_size": 65536 00:36:25.393 } 00:36:25.393 ] 00:36:25.393 }' 00:36:25.393 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:25.393 14:05:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:25.393 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:25.393 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:25.393 14:05:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:26.328 "name": "raid_bdev1", 00:36:26.328 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:26.328 "strip_size_kb": 64, 00:36:26.328 "state": "online", 00:36:26.328 "raid_level": "raid5f", 00:36:26.328 "superblock": false, 00:36:26.328 "num_base_bdevs": 3, 00:36:26.328 
"num_base_bdevs_discovered": 3, 00:36:26.328 "num_base_bdevs_operational": 3, 00:36:26.328 "process": { 00:36:26.328 "type": "rebuild", 00:36:26.328 "target": "spare", 00:36:26.328 "progress": { 00:36:26.328 "blocks": 45056, 00:36:26.328 "percent": 34 00:36:26.328 } 00:36:26.328 }, 00:36:26.328 "base_bdevs_list": [ 00:36:26.328 { 00:36:26.328 "name": "spare", 00:36:26.328 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:26.328 "is_configured": true, 00:36:26.328 "data_offset": 0, 00:36:26.328 "data_size": 65536 00:36:26.328 }, 00:36:26.328 { 00:36:26.328 "name": "BaseBdev2", 00:36:26.328 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:26.328 "is_configured": true, 00:36:26.328 "data_offset": 0, 00:36:26.328 "data_size": 65536 00:36:26.328 }, 00:36:26.328 { 00:36:26.328 "name": "BaseBdev3", 00:36:26.328 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:26.328 "is_configured": true, 00:36:26.328 "data_offset": 0, 00:36:26.328 "data_size": 65536 00:36:26.328 } 00:36:26.328 ] 00:36:26.328 }' 00:36:26.328 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:26.587 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:26.587 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:26.587 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:26.587 14:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:27.524 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:27.524 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:27.525 "name": "raid_bdev1", 00:36:27.525 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:27.525 "strip_size_kb": 64, 00:36:27.525 "state": "online", 00:36:27.525 "raid_level": "raid5f", 00:36:27.525 "superblock": false, 00:36:27.525 "num_base_bdevs": 3, 00:36:27.525 "num_base_bdevs_discovered": 3, 00:36:27.525 "num_base_bdevs_operational": 3, 00:36:27.525 "process": { 00:36:27.525 "type": "rebuild", 00:36:27.525 "target": "spare", 00:36:27.525 "progress": { 00:36:27.525 "blocks": 69632, 00:36:27.525 "percent": 53 00:36:27.525 } 00:36:27.525 }, 00:36:27.525 "base_bdevs_list": [ 00:36:27.525 { 00:36:27.525 "name": "spare", 00:36:27.525 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:27.525 "is_configured": true, 00:36:27.525 "data_offset": 0, 00:36:27.525 "data_size": 65536 00:36:27.525 }, 00:36:27.525 { 00:36:27.525 "name": "BaseBdev2", 00:36:27.525 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:27.525 "is_configured": true, 00:36:27.525 "data_offset": 0, 00:36:27.525 "data_size": 65536 00:36:27.525 }, 00:36:27.525 { 00:36:27.525 "name": "BaseBdev3", 00:36:27.525 "uuid": 
"e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:27.525 "is_configured": true, 00:36:27.525 "data_offset": 0, 00:36:27.525 "data_size": 65536 00:36:27.525 } 00:36:27.525 ] 00:36:27.525 }' 00:36:27.525 14:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:27.525 14:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:27.525 14:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:27.784 14:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:27.784 14:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:28.720 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:28.720 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:28.721 "name": "raid_bdev1", 00:36:28.721 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:28.721 "strip_size_kb": 64, 00:36:28.721 "state": "online", 00:36:28.721 "raid_level": "raid5f", 00:36:28.721 "superblock": false, 00:36:28.721 "num_base_bdevs": 3, 00:36:28.721 "num_base_bdevs_discovered": 3, 00:36:28.721 "num_base_bdevs_operational": 3, 00:36:28.721 "process": { 00:36:28.721 "type": "rebuild", 00:36:28.721 "target": "spare", 00:36:28.721 "progress": { 00:36:28.721 "blocks": 92160, 00:36:28.721 "percent": 70 00:36:28.721 } 00:36:28.721 }, 00:36:28.721 "base_bdevs_list": [ 00:36:28.721 { 00:36:28.721 "name": "spare", 00:36:28.721 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:28.721 "is_configured": true, 00:36:28.721 "data_offset": 0, 00:36:28.721 "data_size": 65536 00:36:28.721 }, 00:36:28.721 { 00:36:28.721 "name": "BaseBdev2", 00:36:28.721 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:28.721 "is_configured": true, 00:36:28.721 "data_offset": 0, 00:36:28.721 "data_size": 65536 00:36:28.721 }, 00:36:28.721 { 00:36:28.721 "name": "BaseBdev3", 00:36:28.721 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:28.721 "is_configured": true, 00:36:28.721 "data_offset": 0, 00:36:28.721 "data_size": 65536 00:36:28.721 } 00:36:28.721 ] 00:36:28.721 }' 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:28.721 14:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:30.098 "name": "raid_bdev1", 00:36:30.098 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:30.098 "strip_size_kb": 64, 00:36:30.098 "state": "online", 00:36:30.098 "raid_level": "raid5f", 00:36:30.098 "superblock": false, 00:36:30.098 "num_base_bdevs": 3, 00:36:30.098 "num_base_bdevs_discovered": 3, 00:36:30.098 "num_base_bdevs_operational": 3, 00:36:30.098 "process": { 00:36:30.098 "type": "rebuild", 00:36:30.098 "target": "spare", 00:36:30.098 "progress": { 00:36:30.098 "blocks": 114688, 00:36:30.098 "percent": 87 00:36:30.098 } 00:36:30.098 }, 00:36:30.098 "base_bdevs_list": [ 00:36:30.098 { 00:36:30.098 "name": "spare", 00:36:30.098 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:30.098 "is_configured": true, 00:36:30.098 "data_offset": 0, 00:36:30.098 "data_size": 
65536 00:36:30.098 }, 00:36:30.098 { 00:36:30.098 "name": "BaseBdev2", 00:36:30.098 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:30.098 "is_configured": true, 00:36:30.098 "data_offset": 0, 00:36:30.098 "data_size": 65536 00:36:30.098 }, 00:36:30.098 { 00:36:30.098 "name": "BaseBdev3", 00:36:30.098 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:30.098 "is_configured": true, 00:36:30.098 "data_offset": 0, 00:36:30.098 "data_size": 65536 00:36:30.098 } 00:36:30.098 ] 00:36:30.098 }' 00:36:30.098 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:30.099 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:30.099 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:30.099 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:30.099 14:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:30.666 [2024-10-09 14:05:36.972346] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:30.666 [2024-10-09 14:05:36.972541] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:30.666 [2024-10-09 14:05:36.972609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:30.925 "name": "raid_bdev1", 00:36:30.925 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:30.925 "strip_size_kb": 64, 00:36:30.925 "state": "online", 00:36:30.925 "raid_level": "raid5f", 00:36:30.925 "superblock": false, 00:36:30.925 "num_base_bdevs": 3, 00:36:30.925 "num_base_bdevs_discovered": 3, 00:36:30.925 "num_base_bdevs_operational": 3, 00:36:30.925 "base_bdevs_list": [ 00:36:30.925 { 00:36:30.925 "name": "spare", 00:36:30.925 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:30.925 "is_configured": true, 00:36:30.925 "data_offset": 0, 00:36:30.925 "data_size": 65536 00:36:30.925 }, 00:36:30.925 { 00:36:30.925 "name": "BaseBdev2", 00:36:30.925 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:30.925 "is_configured": true, 00:36:30.925 "data_offset": 0, 00:36:30.925 "data_size": 65536 00:36:30.925 }, 00:36:30.925 { 00:36:30.925 "name": "BaseBdev3", 00:36:30.925 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:30.925 "is_configured": true, 00:36:30.925 "data_offset": 0, 00:36:30.925 "data_size": 65536 00:36:30.925 } 00:36:30.925 ] 00:36:30.925 }' 00:36:30.925 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:30.925 14:05:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:31.187 "name": "raid_bdev1", 00:36:31.187 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:31.187 "strip_size_kb": 64, 00:36:31.187 "state": "online", 00:36:31.187 "raid_level": "raid5f", 00:36:31.187 "superblock": false, 00:36:31.187 "num_base_bdevs": 3, 00:36:31.187 "num_base_bdevs_discovered": 3, 00:36:31.187 "num_base_bdevs_operational": 3, 00:36:31.187 "base_bdevs_list": [ 00:36:31.187 
{ 00:36:31.187 "name": "spare", 00:36:31.187 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 }, 00:36:31.187 { 00:36:31.187 "name": "BaseBdev2", 00:36:31.187 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 }, 00:36:31.187 { 00:36:31.187 "name": "BaseBdev3", 00:36:31.187 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 } 00:36:31.187 ] 00:36:31.187 }' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:31.187 "name": "raid_bdev1", 00:36:31.187 "uuid": "a6590f7f-8edc-49ed-8acc-abc06bc7b442", 00:36:31.187 "strip_size_kb": 64, 00:36:31.187 "state": "online", 00:36:31.187 "raid_level": "raid5f", 00:36:31.187 "superblock": false, 00:36:31.187 "num_base_bdevs": 3, 00:36:31.187 "num_base_bdevs_discovered": 3, 00:36:31.187 "num_base_bdevs_operational": 3, 00:36:31.187 "base_bdevs_list": [ 00:36:31.187 { 00:36:31.187 "name": "spare", 00:36:31.187 "uuid": "83a6f778-3386-5c1c-b20d-f518e9ded194", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 }, 00:36:31.187 { 00:36:31.187 "name": "BaseBdev2", 00:36:31.187 "uuid": "3ef17757-e7c2-55af-9f6f-d79631c7b323", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 }, 00:36:31.187 { 00:36:31.187 "name": "BaseBdev3", 00:36:31.187 "uuid": "e3d60836-73ce-5090-bf36-cf0478a64381", 00:36:31.187 "is_configured": true, 00:36:31.187 "data_offset": 0, 00:36:31.187 "data_size": 65536 00:36:31.187 } 00:36:31.187 ] 00:36:31.187 }' 00:36:31.187 14:05:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:31.187 14:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.795 [2024-10-09 14:05:38.109802] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:31.795 [2024-10-09 14:05:38.109837] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:31.795 [2024-10-09 14:05:38.109927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:31.795 [2024-10-09 14:05:38.110009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:31.795 [2024-10-09 14:05:38.110021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:36:31.795 14:05:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:31.795 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:32.055 /dev/nbd0 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:32.055 1+0 records in 00:36:32.055 1+0 records out 00:36:32.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214735 s, 19.1 MB/s 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:32.055 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:36:32.314 /dev/nbd1 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@869 -- # local i 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:32.314 1+0 records in 00:36:32.314 1+0 records out 00:36:32.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291691 s, 14.0 MB/s 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:32.314 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks 
/var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:32.573 14:05:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92509 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92509 ']' 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 92509 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:32.832 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92509 00:36:33.090 killing process with pid 92509 00:36:33.090 Received shutdown signal, test time was about 60.000000 seconds 00:36:33.090 00:36:33.090 Latency(us) 00:36:33.090 [2024-10-09T14:05:39.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.090 [2024-10-09T14:05:39.641Z] =================================================================================================================== 00:36:33.090 [2024-10-09T14:05:39.641Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:33.090 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:36:33.090 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:33.090 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92509' 00:36:33.090 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92509 00:36:33.090 [2024-10-09 14:05:39.397652] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:33.090 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92509 00:36:33.090 [2024-10-09 14:05:39.437681] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:36:33.349 00:36:33.349 real 0m14.126s 00:36:33.349 user 0m17.832s 00:36:33.349 sys 0m2.252s 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.349 ************************************ 00:36:33.349 END TEST raid5f_rebuild_test 00:36:33.349 ************************************ 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:36:33.349 14:05:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:36:33.349 14:05:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:36:33.349 14:05:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:33.349 14:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:33.349 ************************************ 00:36:33.349 START TEST raid5f_rebuild_test_sb 00:36:33.349 ************************************ 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:36:33.349 14:05:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:33.349 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92938 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92938 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92938 ']' 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:33.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:33.350 14:05:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:33.350 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:33.350 Zero copy mechanism will not be used. 00:36:33.350 [2024-10-09 14:05:39.855690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:36:33.350 [2024-10-09 14:05:39.855880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92938 ] 00:36:33.608 [2024-10-09 14:05:40.036571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.608 [2024-10-09 14:05:40.081010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.608 [2024-10-09 14:05:40.123993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:33.608 [2024-10-09 14:05:40.124026] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.176 BaseBdev1_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.176 [2024-10-09 14:05:40.655769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:34.176 [2024-10-09 14:05:40.655839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.176 [2024-10-09 14:05:40.655890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:34.176 [2024-10-09 14:05:40.655915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.176 [2024-10-09 14:05:40.658406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.176 [2024-10-09 14:05:40.658571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:34.176 BaseBdev1 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.176 BaseBdev2_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:34.176 
14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.176 [2024-10-09 14:05:40.696501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:34.176 [2024-10-09 14:05:40.696724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.176 [2024-10-09 14:05:40.696765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:34.176 [2024-10-09 14:05:40.696781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.176 [2024-10-09 14:05:40.699895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.176 [2024-10-09 14:05:40.700069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:34.176 BaseBdev2 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.176 BaseBdev3_malloc 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.176 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:34.435 [2024-10-09 14:05:40.725697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:36:34.436 [2024-10-09 14:05:40.725747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.436 [2024-10-09 14:05:40.725776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:34.436 [2024-10-09 14:05:40.725788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.436 [2024-10-09 14:05:40.728171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.436 [2024-10-09 14:05:40.728209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:36:34.436 BaseBdev3 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.436 spare_malloc 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.436 spare_delay 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.436 [2024-10-09 14:05:40.758725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:34.436 [2024-10-09 14:05:40.758878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.436 [2024-10-09 14:05:40.758916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:34.436 [2024-10-09 14:05:40.758928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.436 [2024-10-09 14:05:40.761363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.436 [2024-10-09 14:05:40.761401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:34.436 spare 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.436 [2024-10-09 14:05:40.770803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:34.436 [2024-10-09 14:05:40.772928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:34.436 [2024-10-09 14:05:40.772995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:34.436 [2024-10-09 14:05:40.773138] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006280 00:36:34.436 [2024-10-09 14:05:40.773154] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:34.436 [2024-10-09 14:05:40.773412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:34.436 [2024-10-09 14:05:40.773836] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:36:34.436 [2024-10-09 14:05:40.773899] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:36:34.436 [2024-10-09 14:05:40.774025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:34.436 "name": "raid_bdev1", 00:36:34.436 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:34.436 "strip_size_kb": 64, 00:36:34.436 "state": "online", 00:36:34.436 "raid_level": "raid5f", 00:36:34.436 "superblock": true, 00:36:34.436 "num_base_bdevs": 3, 00:36:34.436 "num_base_bdevs_discovered": 3, 00:36:34.436 "num_base_bdevs_operational": 3, 00:36:34.436 "base_bdevs_list": [ 00:36:34.436 { 00:36:34.436 "name": "BaseBdev1", 00:36:34.436 "uuid": "c1e159fb-fd1d-5468-adc8-e1b439dba8bf", 00:36:34.436 "is_configured": true, 00:36:34.436 "data_offset": 2048, 00:36:34.436 "data_size": 63488 00:36:34.436 }, 00:36:34.436 { 00:36:34.436 "name": "BaseBdev2", 00:36:34.436 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:34.436 "is_configured": true, 00:36:34.436 "data_offset": 2048, 00:36:34.436 "data_size": 63488 00:36:34.436 }, 00:36:34.436 { 00:36:34.436 "name": "BaseBdev3", 00:36:34.436 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:34.436 "is_configured": true, 00:36:34.436 "data_offset": 2048, 00:36:34.436 "data_size": 63488 00:36:34.436 } 00:36:34.436 ] 00:36:34.436 }' 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:34.436 14:05:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.694 [2024-10-09 14:05:41.207789] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.694 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 
00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:34.952 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:35.211 [2024-10-09 14:05:41.543765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:35.211 /dev/nbd0 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:36:35.211 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:35.212 1+0 records in 00:36:35.212 1+0 records out 00:36:35.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022108 s, 18.5 MB/s 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:36:35.212 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:36:35.471 496+0 records in 00:36:35.471 496+0 records out 00:36:35.471 65011712 bytes (65 MB, 62 MiB) copied, 0.351277 s, 185 MB/s 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:35.471 14:05:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.471 14:05:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:35.730 [2024-10-09 14:05:42.236785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.730 [2024-10-09 14:05:42.254101] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:35.730 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.989 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:35.989 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:35.989 "name": "raid_bdev1", 00:36:35.989 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:35.989 "strip_size_kb": 64, 00:36:35.989 "state": "online", 
00:36:35.989 "raid_level": "raid5f", 00:36:35.989 "superblock": true, 00:36:35.989 "num_base_bdevs": 3, 00:36:35.989 "num_base_bdevs_discovered": 2, 00:36:35.989 "num_base_bdevs_operational": 2, 00:36:35.989 "base_bdevs_list": [ 00:36:35.989 { 00:36:35.989 "name": null, 00:36:35.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.989 "is_configured": false, 00:36:35.989 "data_offset": 0, 00:36:35.989 "data_size": 63488 00:36:35.989 }, 00:36:35.989 { 00:36:35.989 "name": "BaseBdev2", 00:36:35.989 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:35.989 "is_configured": true, 00:36:35.989 "data_offset": 2048, 00:36:35.989 "data_size": 63488 00:36:35.989 }, 00:36:35.989 { 00:36:35.989 "name": "BaseBdev3", 00:36:35.989 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:35.989 "is_configured": true, 00:36:35.989 "data_offset": 2048, 00:36:35.989 "data_size": 63488 00:36:35.989 } 00:36:35.989 ] 00:36:35.989 }' 00:36:35.989 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:35.989 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.248 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:36.248 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:36.248 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:36.248 [2024-10-09 14:05:42.662185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:36.248 [2024-10-09 14:05:42.666000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:36:36.248 [2024-10-09 14:05:42.668662] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:36.248 14:05:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:36.248 14:05:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:37.183 "name": "raid_bdev1", 00:36:37.183 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:37.183 "strip_size_kb": 64, 00:36:37.183 "state": "online", 00:36:37.183 "raid_level": "raid5f", 00:36:37.183 "superblock": true, 00:36:37.183 "num_base_bdevs": 3, 00:36:37.183 "num_base_bdevs_discovered": 3, 00:36:37.183 "num_base_bdevs_operational": 3, 00:36:37.183 "process": { 00:36:37.183 "type": "rebuild", 00:36:37.183 "target": "spare", 00:36:37.183 "progress": { 00:36:37.183 "blocks": 20480, 00:36:37.183 "percent": 16 00:36:37.183 } 00:36:37.183 }, 00:36:37.183 "base_bdevs_list": [ 00:36:37.183 { 00:36:37.183 "name": "spare", 00:36:37.183 "uuid": 
"c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:37.183 "is_configured": true, 00:36:37.183 "data_offset": 2048, 00:36:37.183 "data_size": 63488 00:36:37.183 }, 00:36:37.183 { 00:36:37.183 "name": "BaseBdev2", 00:36:37.183 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:37.183 "is_configured": true, 00:36:37.183 "data_offset": 2048, 00:36:37.183 "data_size": 63488 00:36:37.183 }, 00:36:37.183 { 00:36:37.183 "name": "BaseBdev3", 00:36:37.183 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:37.183 "is_configured": true, 00:36:37.183 "data_offset": 2048, 00:36:37.183 "data_size": 63488 00:36:37.183 } 00:36:37.183 ] 00:36:37.183 }' 00:36:37.183 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.442 [2024-10-09 14:05:43.825490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.442 [2024-10-09 14:05:43.878473] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:37.442 [2024-10-09 14:05:43.878537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.442 [2024-10-09 14:05:43.878564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.442 [2024-10-09 14:05:43.878580] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.442 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:36:37.442 "name": "raid_bdev1", 00:36:37.442 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:37.442 "strip_size_kb": 64, 00:36:37.442 "state": "online", 00:36:37.442 "raid_level": "raid5f", 00:36:37.442 "superblock": true, 00:36:37.442 "num_base_bdevs": 3, 00:36:37.442 "num_base_bdevs_discovered": 2, 00:36:37.442 "num_base_bdevs_operational": 2, 00:36:37.442 "base_bdevs_list": [ 00:36:37.442 { 00:36:37.442 "name": null, 00:36:37.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:37.442 "is_configured": false, 00:36:37.442 "data_offset": 0, 00:36:37.442 "data_size": 63488 00:36:37.442 }, 00:36:37.442 { 00:36:37.442 "name": "BaseBdev2", 00:36:37.442 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:37.442 "is_configured": true, 00:36:37.442 "data_offset": 2048, 00:36:37.442 "data_size": 63488 00:36:37.442 }, 00:36:37.442 { 00:36:37.442 "name": "BaseBdev3", 00:36:37.442 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:37.442 "is_configured": true, 00:36:37.443 "data_offset": 2048, 00:36:37.443 "data_size": 63488 00:36:37.443 } 00:36:37.443 ] 00:36:37.443 }' 00:36:37.443 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:37.443 14:05:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:38.010 "name": "raid_bdev1", 00:36:38.010 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:38.010 "strip_size_kb": 64, 00:36:38.010 "state": "online", 00:36:38.010 "raid_level": "raid5f", 00:36:38.010 "superblock": true, 00:36:38.010 "num_base_bdevs": 3, 00:36:38.010 "num_base_bdevs_discovered": 2, 00:36:38.010 "num_base_bdevs_operational": 2, 00:36:38.010 "base_bdevs_list": [ 00:36:38.010 { 00:36:38.010 "name": null, 00:36:38.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.010 "is_configured": false, 00:36:38.010 "data_offset": 0, 00:36:38.010 "data_size": 63488 00:36:38.010 }, 00:36:38.010 { 00:36:38.010 "name": "BaseBdev2", 00:36:38.010 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:38.010 "is_configured": true, 00:36:38.010 "data_offset": 2048, 00:36:38.010 "data_size": 63488 00:36:38.010 }, 00:36:38.010 { 00:36:38.010 "name": "BaseBdev3", 00:36:38.010 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:38.010 "is_configured": true, 00:36:38.010 "data_offset": 2048, 00:36:38.010 "data_size": 63488 00:36:38.010 } 00:36:38.010 ] 00:36:38.010 }' 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:38.010 [2024-10-09 14:05:44.456523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:38.010 [2024-10-09 14:05:44.460233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:36:38.010 [2024-10-09 14:05:44.462708] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.010 14:05:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.947 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.947 14:05:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:39.205 "name": "raid_bdev1", 00:36:39.205 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:39.205 "strip_size_kb": 64, 00:36:39.205 "state": "online", 00:36:39.205 "raid_level": "raid5f", 00:36:39.205 "superblock": true, 00:36:39.205 "num_base_bdevs": 3, 00:36:39.205 "num_base_bdevs_discovered": 3, 00:36:39.205 "num_base_bdevs_operational": 3, 00:36:39.205 "process": { 00:36:39.205 "type": "rebuild", 00:36:39.205 "target": "spare", 00:36:39.205 "progress": { 00:36:39.205 "blocks": 20480, 00:36:39.205 "percent": 16 00:36:39.205 } 00:36:39.205 }, 00:36:39.205 "base_bdevs_list": [ 00:36:39.205 { 00:36:39.205 "name": "spare", 00:36:39.205 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:39.205 "is_configured": true, 00:36:39.205 "data_offset": 2048, 00:36:39.205 "data_size": 63488 00:36:39.205 }, 00:36:39.205 { 00:36:39.205 "name": "BaseBdev2", 00:36:39.205 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:39.205 "is_configured": true, 00:36:39.205 "data_offset": 2048, 00:36:39.205 "data_size": 63488 00:36:39.205 }, 00:36:39.205 { 00:36:39.205 "name": "BaseBdev3", 00:36:39.205 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:39.205 "is_configured": true, 00:36:39.205 "data_offset": 2048, 00:36:39.205 "data_size": 63488 00:36:39.205 } 00:36:39.205 ] 00:36:39.205 }' 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:36:39.205 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=479 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:39.205 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:39.206 "name": "raid_bdev1", 00:36:39.206 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:39.206 "strip_size_kb": 64, 00:36:39.206 "state": "online", 00:36:39.206 "raid_level": "raid5f", 00:36:39.206 "superblock": true, 00:36:39.206 "num_base_bdevs": 3, 00:36:39.206 "num_base_bdevs_discovered": 3, 00:36:39.206 "num_base_bdevs_operational": 3, 00:36:39.206 "process": { 00:36:39.206 "type": "rebuild", 00:36:39.206 "target": "spare", 00:36:39.206 "progress": { 00:36:39.206 "blocks": 22528, 00:36:39.206 "percent": 17 00:36:39.206 } 00:36:39.206 }, 00:36:39.206 "base_bdevs_list": [ 00:36:39.206 { 00:36:39.206 "name": "spare", 00:36:39.206 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:39.206 "is_configured": true, 00:36:39.206 "data_offset": 2048, 00:36:39.206 "data_size": 63488 00:36:39.206 }, 00:36:39.206 { 00:36:39.206 "name": "BaseBdev2", 00:36:39.206 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:39.206 "is_configured": true, 00:36:39.206 "data_offset": 2048, 00:36:39.206 "data_size": 63488 00:36:39.206 }, 00:36:39.206 { 00:36:39.206 "name": "BaseBdev3", 00:36:39.206 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:39.206 "is_configured": true, 00:36:39.206 "data_offset": 2048, 00:36:39.206 "data_size": 63488 00:36:39.206 } 00:36:39.206 ] 00:36:39.206 }' 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:39.206 14:05:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:40.582 "name": "raid_bdev1", 00:36:40.582 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:40.582 "strip_size_kb": 64, 00:36:40.582 "state": "online", 00:36:40.582 "raid_level": "raid5f", 00:36:40.582 "superblock": true, 00:36:40.582 "num_base_bdevs": 3, 00:36:40.582 "num_base_bdevs_discovered": 3, 00:36:40.582 "num_base_bdevs_operational": 3, 00:36:40.582 "process": { 00:36:40.582 "type": "rebuild", 00:36:40.582 "target": "spare", 00:36:40.582 "progress": { 00:36:40.582 "blocks": 45056, 00:36:40.582 "percent": 35 00:36:40.582 } 00:36:40.582 }, 00:36:40.582 "base_bdevs_list": [ 00:36:40.582 { 00:36:40.582 "name": "spare", 00:36:40.582 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:40.582 "is_configured": true, 
00:36:40.582 "data_offset": 2048, 00:36:40.582 "data_size": 63488 00:36:40.582 }, 00:36:40.582 { 00:36:40.582 "name": "BaseBdev2", 00:36:40.582 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:40.582 "is_configured": true, 00:36:40.582 "data_offset": 2048, 00:36:40.582 "data_size": 63488 00:36:40.582 }, 00:36:40.582 { 00:36:40.582 "name": "BaseBdev3", 00:36:40.582 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:40.582 "is_configured": true, 00:36:40.582 "data_offset": 2048, 00:36:40.582 "data_size": 63488 00:36:40.582 } 00:36:40.582 ] 00:36:40.582 }' 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:40.582 14:05:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:41.515 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:41.515 "name": "raid_bdev1", 00:36:41.515 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:41.515 "strip_size_kb": 64, 00:36:41.515 "state": "online", 00:36:41.515 "raid_level": "raid5f", 00:36:41.515 "superblock": true, 00:36:41.515 "num_base_bdevs": 3, 00:36:41.515 "num_base_bdevs_discovered": 3, 00:36:41.516 "num_base_bdevs_operational": 3, 00:36:41.516 "process": { 00:36:41.516 "type": "rebuild", 00:36:41.516 "target": "spare", 00:36:41.516 "progress": { 00:36:41.516 "blocks": 69632, 00:36:41.516 "percent": 54 00:36:41.516 } 00:36:41.516 }, 00:36:41.516 "base_bdevs_list": [ 00:36:41.516 { 00:36:41.516 "name": "spare", 00:36:41.516 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:41.516 "is_configured": true, 00:36:41.516 "data_offset": 2048, 00:36:41.516 "data_size": 63488 00:36:41.516 }, 00:36:41.516 { 00:36:41.516 "name": "BaseBdev2", 00:36:41.516 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:41.516 "is_configured": true, 00:36:41.516 "data_offset": 2048, 00:36:41.516 "data_size": 63488 00:36:41.516 }, 00:36:41.516 { 00:36:41.516 "name": "BaseBdev3", 00:36:41.516 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:41.516 "is_configured": true, 00:36:41.516 "data_offset": 2048, 00:36:41.516 "data_size": 63488 00:36:41.516 } 00:36:41.516 ] 00:36:41.516 }' 00:36:41.516 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:41.516 14:05:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:36:41.516 14:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:41.516 14:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:41.516 14:05:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:42.892 "name": "raid_bdev1", 00:36:42.892 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:42.892 "strip_size_kb": 64, 00:36:42.892 "state": "online", 00:36:42.892 "raid_level": "raid5f", 00:36:42.892 "superblock": true, 00:36:42.892 "num_base_bdevs": 3, 00:36:42.892 
"num_base_bdevs_discovered": 3, 00:36:42.892 "num_base_bdevs_operational": 3, 00:36:42.892 "process": { 00:36:42.892 "type": "rebuild", 00:36:42.892 "target": "spare", 00:36:42.892 "progress": { 00:36:42.892 "blocks": 92160, 00:36:42.892 "percent": 72 00:36:42.892 } 00:36:42.892 }, 00:36:42.892 "base_bdevs_list": [ 00:36:42.892 { 00:36:42.892 "name": "spare", 00:36:42.892 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:42.892 "is_configured": true, 00:36:42.892 "data_offset": 2048, 00:36:42.892 "data_size": 63488 00:36:42.892 }, 00:36:42.892 { 00:36:42.892 "name": "BaseBdev2", 00:36:42.892 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:42.892 "is_configured": true, 00:36:42.892 "data_offset": 2048, 00:36:42.892 "data_size": 63488 00:36:42.892 }, 00:36:42.892 { 00:36:42.892 "name": "BaseBdev3", 00:36:42.892 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:42.892 "is_configured": true, 00:36:42.892 "data_offset": 2048, 00:36:42.892 "data_size": 63488 00:36:42.892 } 00:36:42.892 ] 00:36:42.892 }' 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:42.892 14:05:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:43.826 "name": "raid_bdev1", 00:36:43.826 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:43.826 "strip_size_kb": 64, 00:36:43.826 "state": "online", 00:36:43.826 "raid_level": "raid5f", 00:36:43.826 "superblock": true, 00:36:43.826 "num_base_bdevs": 3, 00:36:43.826 "num_base_bdevs_discovered": 3, 00:36:43.826 "num_base_bdevs_operational": 3, 00:36:43.826 "process": { 00:36:43.826 "type": "rebuild", 00:36:43.826 "target": "spare", 00:36:43.826 "progress": { 00:36:43.826 "blocks": 116736, 00:36:43.826 "percent": 91 00:36:43.826 } 00:36:43.826 }, 00:36:43.826 "base_bdevs_list": [ 00:36:43.826 { 00:36:43.826 "name": "spare", 00:36:43.826 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:43.826 "is_configured": true, 00:36:43.826 "data_offset": 2048, 00:36:43.826 "data_size": 63488 00:36:43.826 }, 00:36:43.826 { 00:36:43.826 "name": "BaseBdev2", 00:36:43.826 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:43.826 "is_configured": true, 00:36:43.826 "data_offset": 2048, 00:36:43.826 "data_size": 63488 00:36:43.826 }, 00:36:43.826 { 
00:36:43.826 "name": "BaseBdev3", 00:36:43.826 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:43.826 "is_configured": true, 00:36:43.826 "data_offset": 2048, 00:36:43.826 "data_size": 63488 00:36:43.826 } 00:36:43.826 ] 00:36:43.826 }' 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:43.826 14:05:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:44.392 [2024-10-09 14:05:50.711995] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:44.392 [2024-10-09 14:05:50.712070] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:44.392 [2024-10-09 14:05:50.712176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.960 
14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:44.960 "name": "raid_bdev1", 00:36:44.960 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:44.960 "strip_size_kb": 64, 00:36:44.960 "state": "online", 00:36:44.960 "raid_level": "raid5f", 00:36:44.960 "superblock": true, 00:36:44.960 "num_base_bdevs": 3, 00:36:44.960 "num_base_bdevs_discovered": 3, 00:36:44.960 "num_base_bdevs_operational": 3, 00:36:44.960 "base_bdevs_list": [ 00:36:44.960 { 00:36:44.960 "name": "spare", 00:36:44.960 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:44.960 "is_configured": true, 00:36:44.960 "data_offset": 2048, 00:36:44.960 "data_size": 63488 00:36:44.960 }, 00:36:44.960 { 00:36:44.960 "name": "BaseBdev2", 00:36:44.960 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:44.960 "is_configured": true, 00:36:44.960 "data_offset": 2048, 00:36:44.960 "data_size": 63488 00:36:44.960 }, 00:36:44.960 { 00:36:44.960 "name": "BaseBdev3", 00:36:44.960 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:44.960 "is_configured": true, 00:36:44.960 "data_offset": 2048, 00:36:44.960 "data_size": 63488 00:36:44.960 } 00:36:44.960 ] 00:36:44.960 }' 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:44.960 
14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:44.960 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:45.219 "name": "raid_bdev1", 00:36:45.219 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:45.219 "strip_size_kb": 64, 00:36:45.219 "state": "online", 00:36:45.219 "raid_level": "raid5f", 00:36:45.219 "superblock": true, 00:36:45.219 "num_base_bdevs": 3, 00:36:45.219 "num_base_bdevs_discovered": 3, 00:36:45.219 "num_base_bdevs_operational": 3, 00:36:45.219 "base_bdevs_list": [ 00:36:45.219 { 00:36:45.219 "name": "spare", 00:36:45.219 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 
"data_size": 63488 00:36:45.219 }, 00:36:45.219 { 00:36:45.219 "name": "BaseBdev2", 00:36:45.219 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 "data_size": 63488 00:36:45.219 }, 00:36:45.219 { 00:36:45.219 "name": "BaseBdev3", 00:36:45.219 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 "data_size": 63488 00:36:45.219 } 00:36:45.219 ] 00:36:45.219 }' 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:45.219 "name": "raid_bdev1", 00:36:45.219 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:45.219 "strip_size_kb": 64, 00:36:45.219 "state": "online", 00:36:45.219 "raid_level": "raid5f", 00:36:45.219 "superblock": true, 00:36:45.219 "num_base_bdevs": 3, 00:36:45.219 "num_base_bdevs_discovered": 3, 00:36:45.219 "num_base_bdevs_operational": 3, 00:36:45.219 "base_bdevs_list": [ 00:36:45.219 { 00:36:45.219 "name": "spare", 00:36:45.219 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 "data_size": 63488 00:36:45.219 }, 00:36:45.219 { 00:36:45.219 "name": "BaseBdev2", 00:36:45.219 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 "data_size": 63488 00:36:45.219 }, 00:36:45.219 { 00:36:45.219 "name": "BaseBdev3", 00:36:45.219 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:45.219 "is_configured": true, 00:36:45.219 "data_offset": 2048, 00:36:45.219 "data_size": 63488 00:36:45.219 } 00:36:45.219 ] 00:36:45.219 }' 00:36:45.219 14:05:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:45.219 
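The trace above repeatedly runs `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all` over RPC, filters the `raid_bdev1` entry with jq, and compares individual fields (state, raid level, operational bdev count) against expectations. A minimal self-contained sketch of that field check — using sed instead of jq and an inlined copy of the JSON, both assumptions made purely for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the verify_raid_bdev_state field checks seen in
# the trace above: pull a string field out of the bdev_raid_get_bdevs JSON
# and compare it against the expected value.
# (The real scripts use rpc.py + jq; sed and the inlined JSON are assumptions.)
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid5f", "strip_size_kb": 64, "num_base_bdevs_operational": 3 }'

json_field() {  # json_field <key> <json> -> prints the string value of <key>
  sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" <<<"$2"
}

state=$(json_field state "$raid_bdev_info")
level=$(json_field raid_level "$raid_bdev_info")
[[ $state == online && $level == raid5f ]] && echo "verified: $state/$level"
```

The real helper also diffs the `base_bdevs_list` entries; this sketch only shows the scalar-field comparison pattern.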
14:05:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.786 [2024-10-09 14:05:52.089425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:45.786 [2024-10-09 14:05:52.089579] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:45.786 [2024-10-09 14:05:52.089690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:45.786 [2024-10-09 14:05:52.089777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:45.786 [2024-10-09 14:05:52.089793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:45.786 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:46.045 /dev/nbd0 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w 
nbd0 /proc/partitions 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:46.045 1+0 records in 00:36:46.045 1+0 records out 00:36:46.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063175 s, 6.5 MB/s 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:46.045 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:36:46.304 /dev/nbd1 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:36:46.304 
14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:46.304 1+0 records in 00:36:46.304 1+0 records out 00:36:46.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040616 s, 10.1 MB/s 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:46.304 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:46.564 
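Both `waitfornbd` invocations above poll `/proc/partitions` with `grep -q -w` up to 20 times before declaring the nbd device ready, then read one 4 KiB block with `dd` to confirm it answers I/O. The underlying retry pattern can be sketched generically — the helper name `retry` and the fixed 0.1 s sleep are assumptions; the real helper greps `/proc/partitions` specifically:

```shell
#!/usr/bin/env bash
# Sketch of the retry pattern behind waitfornbd: run a predicate command
# up to $1 times, sleeping briefly between attempts, and report whether it
# ever succeeded.
retry() {
  local tries=$1; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0   # predicate succeeded -> device (etc.) is ready
    sleep 0.1
  done
  return 1             # exhausted all attempts
}

# The nbd case in the trace would correspond to:
#   retry 20 grep -qw nbd0 /proc/partitions
retry 3 true && echo "predicate satisfied"
retry 2 false || echo "gave up after retries"
```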
14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:46.564 14:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:46.564 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:46.822 14:05:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:46.822 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:46.822 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:46.822 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:46.822 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:46.822 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:47.080 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:47.080 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:47.080 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:36:47.080 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.081 [2024-10-09 14:05:53.390139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:47.081 [2024-10-09 14:05:53.390317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:47.081 [2024-10-09 14:05:53.390353] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:36:47.081 [2024-10-09 14:05:53.390366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:47.081 [2024-10-09 14:05:53.392901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:47.081 [2024-10-09 14:05:53.393044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:47.081 [2024-10-09 14:05:53.393145] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:47.081 [2024-10-09 14:05:53.393193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:47.081 [2024-10-09 14:05:53.393309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:47.081 [2024-10-09 14:05:53.393403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:47.081 spare 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.081 [2024-10-09 14:05:53.493488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:36:47.081 [2024-10-09 14:05:53.493668] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:36:47.081 [2024-10-09 14:05:53.494013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:36:47.081 [2024-10-09 14:05:53.494489] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:36:47.081 [2024-10-09 14:05:53.494617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006600 00:36:47.081 [2024-10-09 14:05:53.494873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:47.081 "name": "raid_bdev1", 00:36:47.081 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:47.081 "strip_size_kb": 64, 00:36:47.081 "state": "online", 00:36:47.081 "raid_level": "raid5f", 00:36:47.081 "superblock": true, 00:36:47.081 "num_base_bdevs": 3, 00:36:47.081 "num_base_bdevs_discovered": 3, 00:36:47.081 "num_base_bdevs_operational": 3, 00:36:47.081 "base_bdevs_list": [ 00:36:47.081 { 00:36:47.081 "name": "spare", 00:36:47.081 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:47.081 "is_configured": true, 00:36:47.081 "data_offset": 2048, 00:36:47.081 "data_size": 63488 00:36:47.081 }, 00:36:47.081 { 00:36:47.081 "name": "BaseBdev2", 00:36:47.081 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:47.081 "is_configured": true, 00:36:47.081 "data_offset": 2048, 00:36:47.081 "data_size": 63488 00:36:47.081 }, 00:36:47.081 { 00:36:47.081 "name": "BaseBdev3", 00:36:47.081 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:47.081 "is_configured": true, 00:36:47.081 "data_offset": 2048, 00:36:47.081 "data_size": 63488 00:36:47.081 } 00:36:47.081 ] 00:36:47.081 }' 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:47.081 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:47.648 "name": "raid_bdev1", 00:36:47.648 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:47.648 "strip_size_kb": 64, 00:36:47.648 "state": "online", 00:36:47.648 "raid_level": "raid5f", 00:36:47.648 "superblock": true, 00:36:47.648 "num_base_bdevs": 3, 00:36:47.648 "num_base_bdevs_discovered": 3, 00:36:47.648 "num_base_bdevs_operational": 3, 00:36:47.648 "base_bdevs_list": [ 00:36:47.648 { 00:36:47.648 "name": "spare", 00:36:47.648 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:47.648 "is_configured": true, 00:36:47.648 "data_offset": 2048, 00:36:47.648 "data_size": 63488 00:36:47.648 }, 00:36:47.648 { 00:36:47.648 "name": "BaseBdev2", 00:36:47.648 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:47.648 "is_configured": true, 00:36:47.648 "data_offset": 2048, 00:36:47.648 "data_size": 63488 00:36:47.648 }, 00:36:47.648 { 00:36:47.648 "name": "BaseBdev3", 00:36:47.648 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:47.648 "is_configured": true, 00:36:47.648 "data_offset": 2048, 00:36:47.648 "data_size": 63488 00:36:47.648 } 00:36:47.648 ] 00:36:47.648 }' 00:36:47.648 14:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.648 [2024-10-09 14:05:54.122985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.648 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:47.648 "name": "raid_bdev1", 00:36:47.648 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:47.648 "strip_size_kb": 64, 00:36:47.648 "state": "online", 00:36:47.648 "raid_level": "raid5f", 00:36:47.648 "superblock": true, 00:36:47.648 "num_base_bdevs": 3, 00:36:47.649 "num_base_bdevs_discovered": 2, 00:36:47.649 "num_base_bdevs_operational": 2, 00:36:47.649 "base_bdevs_list": [ 00:36:47.649 { 00:36:47.649 "name": null, 00:36:47.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.649 "is_configured": false, 00:36:47.649 "data_offset": 0, 00:36:47.649 "data_size": 63488 00:36:47.649 }, 00:36:47.649 { 00:36:47.649 "name": "BaseBdev2", 00:36:47.649 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:47.649 "is_configured": true, 00:36:47.649 "data_offset": 2048, 00:36:47.649 "data_size": 63488 00:36:47.649 }, 00:36:47.649 
{ 00:36:47.649 "name": "BaseBdev3", 00:36:47.649 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:47.649 "is_configured": true, 00:36:47.649 "data_offset": 2048, 00:36:47.649 "data_size": 63488 00:36:47.649 } 00:36:47.649 ] 00:36:47.649 }' 00:36:47.649 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:47.649 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:48.215 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:48.215 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:48.215 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:48.215 [2024-10-09 14:05:54.583112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:48.215 [2024-10-09 14:05:54.583290] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:48.215 [2024-10-09 14:05:54.583307] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:48.215 [2024-10-09 14:05:54.583353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:48.215 [2024-10-09 14:05:54.586972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:36:48.215 [2024-10-09 14:05:54.589603] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:48.215 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:48.215 14:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:49.150 "name": "raid_bdev1", 00:36:49.150 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:49.150 "strip_size_kb": 64, 00:36:49.150 "state": "online", 00:36:49.150 
"raid_level": "raid5f", 00:36:49.150 "superblock": true, 00:36:49.150 "num_base_bdevs": 3, 00:36:49.150 "num_base_bdevs_discovered": 3, 00:36:49.150 "num_base_bdevs_operational": 3, 00:36:49.150 "process": { 00:36:49.150 "type": "rebuild", 00:36:49.150 "target": "spare", 00:36:49.150 "progress": { 00:36:49.150 "blocks": 20480, 00:36:49.150 "percent": 16 00:36:49.150 } 00:36:49.150 }, 00:36:49.150 "base_bdevs_list": [ 00:36:49.150 { 00:36:49.150 "name": "spare", 00:36:49.150 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:49.150 "is_configured": true, 00:36:49.150 "data_offset": 2048, 00:36:49.150 "data_size": 63488 00:36:49.150 }, 00:36:49.150 { 00:36:49.150 "name": "BaseBdev2", 00:36:49.150 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:49.150 "is_configured": true, 00:36:49.150 "data_offset": 2048, 00:36:49.150 "data_size": 63488 00:36:49.150 }, 00:36:49.150 { 00:36:49.150 "name": "BaseBdev3", 00:36:49.150 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:49.150 "is_configured": true, 00:36:49.150 "data_offset": 2048, 00:36:49.150 "data_size": 63488 00:36:49.150 } 00:36:49.150 ] 00:36:49.150 }' 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:49.150 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.409 [2024-10-09 14:05:55.746497] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:49.409 [2024-10-09 14:05:55.798662] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:49.409 [2024-10-09 14:05:55.798861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:49.409 [2024-10-09 14:05:55.798982] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:49.409 [2024-10-09 14:05:55.799025] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:49.409 "name": "raid_bdev1", 00:36:49.409 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:49.409 "strip_size_kb": 64, 00:36:49.409 "state": "online", 00:36:49.409 "raid_level": "raid5f", 00:36:49.409 "superblock": true, 00:36:49.409 "num_base_bdevs": 3, 00:36:49.409 "num_base_bdevs_discovered": 2, 00:36:49.409 "num_base_bdevs_operational": 2, 00:36:49.409 "base_bdevs_list": [ 00:36:49.409 { 00:36:49.409 "name": null, 00:36:49.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.409 "is_configured": false, 00:36:49.409 "data_offset": 0, 00:36:49.409 "data_size": 63488 00:36:49.409 }, 00:36:49.409 { 00:36:49.409 "name": "BaseBdev2", 00:36:49.409 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:49.409 "is_configured": true, 00:36:49.409 "data_offset": 2048, 00:36:49.409 "data_size": 63488 00:36:49.409 }, 00:36:49.409 { 00:36:49.409 "name": "BaseBdev3", 00:36:49.409 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:49.409 "is_configured": true, 00:36:49.409 "data_offset": 2048, 00:36:49.409 "data_size": 63488 00:36:49.409 } 00:36:49.409 ] 00:36:49.409 }' 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:49.409 14:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.976 14:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:49.976 14:05:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:36:49.976 14:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:49.976 [2024-10-09 14:05:56.255973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:49.976 [2024-10-09 14:05:56.256140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:49.976 [2024-10-09 14:05:56.256201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:36:49.976 [2024-10-09 14:05:56.256279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:49.976 [2024-10-09 14:05:56.256779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:49.976 [2024-10-09 14:05:56.256902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:49.976 [2024-10-09 14:05:56.257071] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:49.976 [2024-10-09 14:05:56.257192] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:49.976 [2024-10-09 14:05:56.257297] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:49.976 [2024-10-09 14:05:56.257350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:49.976 spare 00:36:49.976 [2024-10-09 14:05:56.261014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:36:49.976 14:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.976 14:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:36:49.976 [2024-10-09 14:05:56.263489] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:50.912 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:50.912 "name": "raid_bdev1", 00:36:50.912 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:50.912 "strip_size_kb": 64, 00:36:50.912 "state": 
"online", 00:36:50.912 "raid_level": "raid5f", 00:36:50.912 "superblock": true, 00:36:50.912 "num_base_bdevs": 3, 00:36:50.912 "num_base_bdevs_discovered": 3, 00:36:50.912 "num_base_bdevs_operational": 3, 00:36:50.912 "process": { 00:36:50.912 "type": "rebuild", 00:36:50.912 "target": "spare", 00:36:50.912 "progress": { 00:36:50.912 "blocks": 20480, 00:36:50.912 "percent": 16 00:36:50.912 } 00:36:50.912 }, 00:36:50.912 "base_bdevs_list": [ 00:36:50.912 { 00:36:50.912 "name": "spare", 00:36:50.912 "uuid": "c89d5796-eff1-5cd3-9153-24e255faf70a", 00:36:50.912 "is_configured": true, 00:36:50.912 "data_offset": 2048, 00:36:50.912 "data_size": 63488 00:36:50.912 }, 00:36:50.912 { 00:36:50.913 "name": "BaseBdev2", 00:36:50.913 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:50.913 "is_configured": true, 00:36:50.913 "data_offset": 2048, 00:36:50.913 "data_size": 63488 00:36:50.913 }, 00:36:50.913 { 00:36:50.913 "name": "BaseBdev3", 00:36:50.913 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:50.913 "is_configured": true, 00:36:50.913 "data_offset": 2048, 00:36:50.913 "data_size": 63488 00:36:50.913 } 00:36:50.913 ] 00:36:50.913 }' 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:50.913 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:50.913 [2024-10-09 14:05:57.421138] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:51.171 [2024-10-09 14:05:57.472621] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:51.171 [2024-10-09 14:05:57.472686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.171 [2024-10-09 14:05:57.472704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:51.171 [2024-10-09 14:05:57.472718] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:51.171 "name": "raid_bdev1", 00:36:51.171 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:51.171 "strip_size_kb": 64, 00:36:51.171 "state": "online", 00:36:51.171 "raid_level": "raid5f", 00:36:51.171 "superblock": true, 00:36:51.171 "num_base_bdevs": 3, 00:36:51.171 "num_base_bdevs_discovered": 2, 00:36:51.171 "num_base_bdevs_operational": 2, 00:36:51.171 "base_bdevs_list": [ 00:36:51.171 { 00:36:51.171 "name": null, 00:36:51.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.171 "is_configured": false, 00:36:51.171 "data_offset": 0, 00:36:51.171 "data_size": 63488 00:36:51.171 }, 00:36:51.171 { 00:36:51.171 "name": "BaseBdev2", 00:36:51.171 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:51.171 "is_configured": true, 00:36:51.171 "data_offset": 2048, 00:36:51.171 "data_size": 63488 00:36:51.171 }, 00:36:51.171 { 00:36:51.171 "name": "BaseBdev3", 00:36:51.171 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:51.171 "is_configured": true, 00:36:51.171 "data_offset": 2048, 00:36:51.171 "data_size": 63488 00:36:51.171 } 00:36:51.171 ] 00:36:51.171 }' 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:51.171 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:51.430 "name": "raid_bdev1", 00:36:51.430 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:51.430 "strip_size_kb": 64, 00:36:51.430 "state": "online", 00:36:51.430 "raid_level": "raid5f", 00:36:51.430 "superblock": true, 00:36:51.430 "num_base_bdevs": 3, 00:36:51.430 "num_base_bdevs_discovered": 2, 00:36:51.430 "num_base_bdevs_operational": 2, 00:36:51.430 "base_bdevs_list": [ 00:36:51.430 { 00:36:51.430 "name": null, 00:36:51.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:51.430 "is_configured": false, 00:36:51.430 "data_offset": 0, 00:36:51.430 "data_size": 63488 00:36:51.430 }, 00:36:51.430 { 00:36:51.430 "name": "BaseBdev2", 00:36:51.430 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:51.430 "is_configured": true, 00:36:51.430 "data_offset": 2048, 00:36:51.430 "data_size": 63488 00:36:51.430 }, 00:36:51.430 { 00:36:51.430 "name": "BaseBdev3", 00:36:51.430 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:51.430 "is_configured": true, 
00:36:51.430 "data_offset": 2048, 00:36:51.430 "data_size": 63488 00:36:51.430 } 00:36:51.430 ] 00:36:51.430 }' 00:36:51.430 14:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:51.688 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:51.688 [2024-10-09 14:05:58.077884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:51.688 [2024-10-09 14:05:58.078062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:51.688 [2024-10-09 14:05:58.078123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:36:51.688 [2024-10-09 14:05:58.078140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:51.688 [2024-10-09 14:05:58.078537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:51.688 [2024-10-09 
14:05:58.078579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:51.688 [2024-10-09 14:05:58.078653] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:51.688 [2024-10-09 14:05:58.078671] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:51.688 [2024-10-09 14:05:58.078682] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:51.688 [2024-10-09 14:05:58.078695] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:36:51.688 BaseBdev1 00:36:51.689 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:51.689 14:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:52.626 14:05:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:52.626 "name": "raid_bdev1", 00:36:52.626 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:52.626 "strip_size_kb": 64, 00:36:52.626 "state": "online", 00:36:52.626 "raid_level": "raid5f", 00:36:52.626 "superblock": true, 00:36:52.626 "num_base_bdevs": 3, 00:36:52.626 "num_base_bdevs_discovered": 2, 00:36:52.626 "num_base_bdevs_operational": 2, 00:36:52.626 "base_bdevs_list": [ 00:36:52.626 { 00:36:52.626 "name": null, 00:36:52.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:52.626 "is_configured": false, 00:36:52.626 "data_offset": 0, 00:36:52.626 "data_size": 63488 00:36:52.626 }, 00:36:52.626 { 00:36:52.626 "name": "BaseBdev2", 00:36:52.626 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:52.626 "is_configured": true, 00:36:52.626 "data_offset": 2048, 00:36:52.626 "data_size": 63488 00:36:52.626 }, 00:36:52.626 { 00:36:52.626 "name": "BaseBdev3", 00:36:52.626 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:52.626 "is_configured": true, 00:36:52.626 "data_offset": 2048, 00:36:52.626 "data_size": 63488 00:36:52.626 } 00:36:52.626 ] 00:36:52.626 }' 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:52.626 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:53.195 "name": "raid_bdev1", 00:36:53.195 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:53.195 "strip_size_kb": 64, 00:36:53.195 "state": "online", 00:36:53.195 "raid_level": "raid5f", 00:36:53.195 "superblock": true, 00:36:53.195 "num_base_bdevs": 3, 00:36:53.195 "num_base_bdevs_discovered": 2, 00:36:53.195 "num_base_bdevs_operational": 2, 00:36:53.195 "base_bdevs_list": [ 00:36:53.195 { 00:36:53.195 "name": null, 00:36:53.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.195 "is_configured": false, 00:36:53.195 "data_offset": 0, 00:36:53.195 "data_size": 63488 00:36:53.195 }, 00:36:53.195 { 00:36:53.195 "name": "BaseBdev2", 00:36:53.195 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 
00:36:53.195 "is_configured": true, 00:36:53.195 "data_offset": 2048, 00:36:53.195 "data_size": 63488 00:36:53.195 }, 00:36:53.195 { 00:36:53.195 "name": "BaseBdev3", 00:36:53.195 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:53.195 "is_configured": true, 00:36:53.195 "data_offset": 2048, 00:36:53.195 "data_size": 63488 00:36:53.195 } 00:36:53.195 ] 00:36:53.195 }' 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:53.195 14:05:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:53.195 [2024-10-09 14:05:59.694256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:53.195 [2024-10-09 14:05:59.694532] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:53.195 [2024-10-09 14:05:59.694648] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:53.195 request: 00:36:53.195 { 00:36:53.195 "base_bdev": "BaseBdev1", 00:36:53.195 "raid_bdev": "raid_bdev1", 00:36:53.195 "method": "bdev_raid_add_base_bdev", 00:36:53.195 "req_id": 1 00:36:53.195 } 00:36:53.195 Got JSON-RPC error response 00:36:53.195 response: 00:36:53.195 { 00:36:53.195 "code": -22, 00:36:53.195 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:53.195 } 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:53.195 14:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.573 "name": "raid_bdev1", 00:36:54.573 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:54.573 "strip_size_kb": 64, 00:36:54.573 "state": "online", 00:36:54.573 "raid_level": "raid5f", 00:36:54.573 "superblock": true, 00:36:54.573 "num_base_bdevs": 3, 00:36:54.573 "num_base_bdevs_discovered": 2, 00:36:54.573 "num_base_bdevs_operational": 2, 00:36:54.573 "base_bdevs_list": [ 00:36:54.573 { 00:36:54.573 "name": null, 00:36:54.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.573 "is_configured": false, 00:36:54.573 "data_offset": 0, 00:36:54.573 "data_size": 63488 00:36:54.573 }, 00:36:54.573 { 00:36:54.573 
"name": "BaseBdev2", 00:36:54.573 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:54.573 "is_configured": true, 00:36:54.573 "data_offset": 2048, 00:36:54.573 "data_size": 63488 00:36:54.573 }, 00:36:54.573 { 00:36:54.573 "name": "BaseBdev3", 00:36:54.573 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:54.573 "is_configured": true, 00:36:54.573 "data_offset": 2048, 00:36:54.573 "data_size": 63488 00:36:54.573 } 00:36:54.573 ] 00:36:54.573 }' 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.573 14:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:54.832 "name": "raid_bdev1", 00:36:54.832 "uuid": "148379b1-2940-457a-8061-21194439ca00", 00:36:54.832 
"strip_size_kb": 64, 00:36:54.832 "state": "online", 00:36:54.832 "raid_level": "raid5f", 00:36:54.832 "superblock": true, 00:36:54.832 "num_base_bdevs": 3, 00:36:54.832 "num_base_bdevs_discovered": 2, 00:36:54.832 "num_base_bdevs_operational": 2, 00:36:54.832 "base_bdevs_list": [ 00:36:54.832 { 00:36:54.832 "name": null, 00:36:54.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.832 "is_configured": false, 00:36:54.832 "data_offset": 0, 00:36:54.832 "data_size": 63488 00:36:54.832 }, 00:36:54.832 { 00:36:54.832 "name": "BaseBdev2", 00:36:54.832 "uuid": "75e99918-06f7-5db6-83ef-3ad52de44075", 00:36:54.832 "is_configured": true, 00:36:54.832 "data_offset": 2048, 00:36:54.832 "data_size": 63488 00:36:54.832 }, 00:36:54.832 { 00:36:54.832 "name": "BaseBdev3", 00:36:54.832 "uuid": "96684915-c4ca-5da1-b0f9-309080417167", 00:36:54.832 "is_configured": true, 00:36:54.832 "data_offset": 2048, 00:36:54.832 "data_size": 63488 00:36:54.832 } 00:36:54.832 ] 00:36:54.832 }' 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92938 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92938 ']' 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92938 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:54.832 14:06:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92938 00:36:54.832 killing process with pid 92938 00:36:54.832 Received shutdown signal, test time was about 60.000000 seconds 00:36:54.832 00:36:54.832 Latency(us) 00:36:54.832 [2024-10-09T14:06:01.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:54.832 [2024-10-09T14:06:01.383Z] =================================================================================================================== 00:36:54.832 [2024-10-09T14:06:01.383Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:54.832 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:54.833 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92938' 00:36:54.833 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92938 00:36:54.833 [2024-10-09 14:06:01.327410] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:54.833 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92938 00:36:54.833 [2024-10-09 14:06:01.327524] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:54.833 [2024-10-09 14:06:01.327620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:54.833 [2024-10-09 14:06:01.327634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:36:54.833 [2024-10-09 14:06:01.368531] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:55.091 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:36:55.091 00:36:55.091 real 0m21.864s 00:36:55.091 user 0m28.510s 
00:36:55.091 sys 0m3.018s 00:36:55.091 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:55.091 14:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:55.091 ************************************ 00:36:55.091 END TEST raid5f_rebuild_test_sb 00:36:55.091 ************************************ 00:36:55.350 14:06:01 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:36:55.350 14:06:01 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:36:55.350 14:06:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:36:55.350 14:06:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:55.350 14:06:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:55.350 ************************************ 00:36:55.350 START TEST raid5f_state_function_test 00:36:55.350 ************************************ 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:55.350 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93669 00:36:55.351 Process raid pid: 93669 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93669' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93669 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93669 ']' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:55.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:55.351 14:06:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:55.351 [2024-10-09 14:06:01.789055] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:36:55.351 [2024-10-09 14:06:01.789251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:55.609 [2024-10-09 14:06:01.969893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.609 [2024-10-09 14:06:02.016370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.609 [2024-10-09 14:06:02.059490] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:55.609 [2024-10-09 14:06:02.059540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.556 [2024-10-09 14:06:02.754239] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:56.556 [2024-10-09 14:06:02.754288] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:56.556 [2024-10-09 14:06:02.754302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:56.556 [2024-10-09 14:06:02.754317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:56.556 [2024-10-09 14:06:02.754325] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:36:56.556 [2024-10-09 14:06:02.754341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:56.556 [2024-10-09 14:06:02.754349] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:56.556 [2024-10-09 14:06:02.754360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.556 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.557 14:06:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:56.557 "name": "Existed_Raid", 00:36:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.557 "strip_size_kb": 64, 00:36:56.557 "state": "configuring", 00:36:56.557 "raid_level": "raid5f", 00:36:56.557 "superblock": false, 00:36:56.557 "num_base_bdevs": 4, 00:36:56.557 "num_base_bdevs_discovered": 0, 00:36:56.557 "num_base_bdevs_operational": 4, 00:36:56.557 "base_bdevs_list": [ 00:36:56.557 { 00:36:56.557 "name": "BaseBdev1", 00:36:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.557 "is_configured": false, 00:36:56.557 "data_offset": 0, 00:36:56.557 "data_size": 0 00:36:56.557 }, 00:36:56.557 { 00:36:56.557 "name": "BaseBdev2", 00:36:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.557 "is_configured": false, 00:36:56.557 "data_offset": 0, 00:36:56.557 "data_size": 0 00:36:56.557 }, 00:36:56.557 { 00:36:56.557 "name": "BaseBdev3", 00:36:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.557 "is_configured": false, 00:36:56.557 "data_offset": 0, 00:36:56.557 "data_size": 0 00:36:56.557 }, 00:36:56.557 { 00:36:56.557 "name": "BaseBdev4", 00:36:56.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.557 "is_configured": false, 00:36:56.557 "data_offset": 0, 00:36:56.557 "data_size": 0 00:36:56.557 } 00:36:56.557 ] 00:36:56.557 }' 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:56.557 14:06:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 [2024-10-09 14:06:03.202244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:56.849 [2024-10-09 14:06:03.202287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 [2024-10-09 14:06:03.214269] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:56.849 [2024-10-09 14:06:03.214313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:56.849 [2024-10-09 14:06:03.214324] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:56.849 [2024-10-09 14:06:03.214336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:56.849 [2024-10-09 14:06:03.214344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:56.849 [2024-10-09 14:06:03.214356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:56.849 [2024-10-09 14:06:03.214363] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:36:56.849 [2024-10-09 14:06:03.214375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 [2024-10-09 14:06:03.231618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:56.849 BaseBdev1 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.849 
14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.849 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.849 [ 00:36:56.849 { 00:36:56.849 "name": "BaseBdev1", 00:36:56.849 "aliases": [ 00:36:56.849 "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6" 00:36:56.849 ], 00:36:56.849 "product_name": "Malloc disk", 00:36:56.849 "block_size": 512, 00:36:56.849 "num_blocks": 65536, 00:36:56.849 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:56.849 "assigned_rate_limits": { 00:36:56.849 "rw_ios_per_sec": 0, 00:36:56.849 "rw_mbytes_per_sec": 0, 00:36:56.849 "r_mbytes_per_sec": 0, 00:36:56.849 "w_mbytes_per_sec": 0 00:36:56.849 }, 00:36:56.849 "claimed": true, 00:36:56.849 "claim_type": "exclusive_write", 00:36:56.849 "zoned": false, 00:36:56.849 "supported_io_types": { 00:36:56.849 "read": true, 00:36:56.849 "write": true, 00:36:56.849 "unmap": true, 00:36:56.849 "flush": true, 00:36:56.849 "reset": true, 00:36:56.849 "nvme_admin": false, 00:36:56.849 "nvme_io": false, 00:36:56.849 "nvme_io_md": false, 00:36:56.849 "write_zeroes": true, 00:36:56.849 "zcopy": true, 00:36:56.849 "get_zone_info": false, 00:36:56.849 "zone_management": false, 00:36:56.849 "zone_append": false, 00:36:56.849 "compare": false, 00:36:56.849 "compare_and_write": false, 00:36:56.849 "abort": true, 00:36:56.849 "seek_hole": false, 00:36:56.850 "seek_data": false, 00:36:56.850 "copy": true, 00:36:56.850 "nvme_iov_md": false 00:36:56.850 }, 00:36:56.850 "memory_domains": [ 00:36:56.850 { 00:36:56.850 "dma_device_id": "system", 00:36:56.850 "dma_device_type": 1 00:36:56.850 }, 00:36:56.850 { 00:36:56.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:56.850 "dma_device_type": 2 00:36:56.850 } 00:36:56.850 ], 00:36:56.850 "driver_specific": {} 00:36:56.850 } 
00:36:56.850 ] 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:56.850 "name": "Existed_Raid", 00:36:56.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.850 "strip_size_kb": 64, 00:36:56.850 "state": "configuring", 00:36:56.850 "raid_level": "raid5f", 00:36:56.850 "superblock": false, 00:36:56.850 "num_base_bdevs": 4, 00:36:56.850 "num_base_bdevs_discovered": 1, 00:36:56.850 "num_base_bdevs_operational": 4, 00:36:56.850 "base_bdevs_list": [ 00:36:56.850 { 00:36:56.850 "name": "BaseBdev1", 00:36:56.850 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:56.850 "is_configured": true, 00:36:56.850 "data_offset": 0, 00:36:56.850 "data_size": 65536 00:36:56.850 }, 00:36:56.850 { 00:36:56.850 "name": "BaseBdev2", 00:36:56.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.850 "is_configured": false, 00:36:56.850 "data_offset": 0, 00:36:56.850 "data_size": 0 00:36:56.850 }, 00:36:56.850 { 00:36:56.850 "name": "BaseBdev3", 00:36:56.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.850 "is_configured": false, 00:36:56.850 "data_offset": 0, 00:36:56.850 "data_size": 0 00:36:56.850 }, 00:36:56.850 { 00:36:56.850 "name": "BaseBdev4", 00:36:56.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:56.850 "is_configured": false, 00:36:56.850 "data_offset": 0, 00:36:56.850 "data_size": 0 00:36:56.850 } 00:36:56.850 ] 00:36:56.850 }' 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:56.850 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.416 
[2024-10-09 14:06:03.691749] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:57.416 [2024-10-09 14:06:03.691807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.416 [2024-10-09 14:06:03.703788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:57.416 [2024-10-09 14:06:03.706031] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:57.416 [2024-10-09 14:06:03.706075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:57.416 [2024-10-09 14:06:03.706086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:36:57.416 [2024-10-09 14:06:03.706099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:36:57.416 [2024-10-09 14:06:03.706107] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:36:57.416 [2024-10-09 14:06:03.706119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:57.416 "name": "Existed_Raid", 00:36:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 
00:36:57.416 "strip_size_kb": 64, 00:36:57.416 "state": "configuring", 00:36:57.416 "raid_level": "raid5f", 00:36:57.416 "superblock": false, 00:36:57.416 "num_base_bdevs": 4, 00:36:57.416 "num_base_bdevs_discovered": 1, 00:36:57.416 "num_base_bdevs_operational": 4, 00:36:57.416 "base_bdevs_list": [ 00:36:57.416 { 00:36:57.416 "name": "BaseBdev1", 00:36:57.416 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:57.416 "is_configured": true, 00:36:57.416 "data_offset": 0, 00:36:57.416 "data_size": 65536 00:36:57.416 }, 00:36:57.416 { 00:36:57.416 "name": "BaseBdev2", 00:36:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.416 "is_configured": false, 00:36:57.416 "data_offset": 0, 00:36:57.416 "data_size": 0 00:36:57.416 }, 00:36:57.416 { 00:36:57.416 "name": "BaseBdev3", 00:36:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.416 "is_configured": false, 00:36:57.416 "data_offset": 0, 00:36:57.416 "data_size": 0 00:36:57.416 }, 00:36:57.416 { 00:36:57.416 "name": "BaseBdev4", 00:36:57.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.416 "is_configured": false, 00:36:57.416 "data_offset": 0, 00:36:57.416 "data_size": 0 00:36:57.416 } 00:36:57.416 ] 00:36:57.416 }' 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:57.416 14:06:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 [2024-10-09 14:06:04.173418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:57.675 BaseBdev2 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 [ 00:36:57.675 { 00:36:57.675 "name": "BaseBdev2", 00:36:57.675 "aliases": [ 00:36:57.675 "66226c4a-366a-4e48-a6cd-8fefabb83ec8" 00:36:57.675 ], 00:36:57.675 "product_name": "Malloc disk", 00:36:57.675 "block_size": 512, 00:36:57.675 "num_blocks": 65536, 00:36:57.675 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:57.675 "assigned_rate_limits": { 00:36:57.675 "rw_ios_per_sec": 0, 00:36:57.675 "rw_mbytes_per_sec": 0, 00:36:57.675 
"r_mbytes_per_sec": 0, 00:36:57.675 "w_mbytes_per_sec": 0 00:36:57.675 }, 00:36:57.675 "claimed": true, 00:36:57.675 "claim_type": "exclusive_write", 00:36:57.675 "zoned": false, 00:36:57.675 "supported_io_types": { 00:36:57.675 "read": true, 00:36:57.675 "write": true, 00:36:57.675 "unmap": true, 00:36:57.675 "flush": true, 00:36:57.675 "reset": true, 00:36:57.675 "nvme_admin": false, 00:36:57.675 "nvme_io": false, 00:36:57.675 "nvme_io_md": false, 00:36:57.675 "write_zeroes": true, 00:36:57.675 "zcopy": true, 00:36:57.675 "get_zone_info": false, 00:36:57.675 "zone_management": false, 00:36:57.675 "zone_append": false, 00:36:57.675 "compare": false, 00:36:57.675 "compare_and_write": false, 00:36:57.675 "abort": true, 00:36:57.675 "seek_hole": false, 00:36:57.675 "seek_data": false, 00:36:57.675 "copy": true, 00:36:57.675 "nvme_iov_md": false 00:36:57.675 }, 00:36:57.675 "memory_domains": [ 00:36:57.675 { 00:36:57.675 "dma_device_id": "system", 00:36:57.675 "dma_device_type": 1 00:36:57.675 }, 00:36:57.675 { 00:36:57.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.675 "dma_device_type": 2 00:36:57.675 } 00:36:57.675 ], 00:36:57.675 "driver_specific": {} 00:36:57.675 } 00:36:57.675 ] 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:57.675 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:57.933 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:57.933 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:57.933 "name": "Existed_Raid", 00:36:57.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.933 "strip_size_kb": 64, 00:36:57.933 "state": "configuring", 00:36:57.933 "raid_level": "raid5f", 00:36:57.933 "superblock": false, 00:36:57.933 "num_base_bdevs": 4, 00:36:57.933 "num_base_bdevs_discovered": 2, 00:36:57.933 "num_base_bdevs_operational": 4, 00:36:57.933 "base_bdevs_list": [ 00:36:57.933 { 00:36:57.933 "name": "BaseBdev1", 00:36:57.933 "uuid": 
"1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:57.933 "is_configured": true, 00:36:57.933 "data_offset": 0, 00:36:57.933 "data_size": 65536 00:36:57.933 }, 00:36:57.933 { 00:36:57.933 "name": "BaseBdev2", 00:36:57.933 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:57.933 "is_configured": true, 00:36:57.933 "data_offset": 0, 00:36:57.933 "data_size": 65536 00:36:57.933 }, 00:36:57.933 { 00:36:57.933 "name": "BaseBdev3", 00:36:57.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.934 "is_configured": false, 00:36:57.934 "data_offset": 0, 00:36:57.934 "data_size": 0 00:36:57.934 }, 00:36:57.934 { 00:36:57.934 "name": "BaseBdev4", 00:36:57.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.934 "is_configured": false, 00:36:57.934 "data_offset": 0, 00:36:57.934 "data_size": 0 00:36:57.934 } 00:36:57.934 ] 00:36:57.934 }' 00:36:57.934 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:57.934 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.192 [2024-10-09 14:06:04.660580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:58.192 BaseBdev3 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.192 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.192 [ 00:36:58.192 { 00:36:58.192 "name": "BaseBdev3", 00:36:58.192 "aliases": [ 00:36:58.192 "ece14768-c267-4263-9d2f-3a93ccafc1c2" 00:36:58.192 ], 00:36:58.192 "product_name": "Malloc disk", 00:36:58.192 "block_size": 512, 00:36:58.192 "num_blocks": 65536, 00:36:58.192 "uuid": "ece14768-c267-4263-9d2f-3a93ccafc1c2", 00:36:58.192 "assigned_rate_limits": { 00:36:58.192 "rw_ios_per_sec": 0, 00:36:58.192 "rw_mbytes_per_sec": 0, 00:36:58.192 "r_mbytes_per_sec": 0, 00:36:58.192 "w_mbytes_per_sec": 0 00:36:58.192 }, 00:36:58.192 "claimed": true, 00:36:58.192 "claim_type": "exclusive_write", 00:36:58.192 "zoned": false, 00:36:58.192 "supported_io_types": { 00:36:58.192 "read": true, 00:36:58.192 "write": true, 00:36:58.192 "unmap": true, 00:36:58.192 "flush": true, 00:36:58.192 "reset": true, 00:36:58.192 "nvme_admin": false, 
00:36:58.192 "nvme_io": false, 00:36:58.192 "nvme_io_md": false, 00:36:58.192 "write_zeroes": true, 00:36:58.192 "zcopy": true, 00:36:58.192 "get_zone_info": false, 00:36:58.192 "zone_management": false, 00:36:58.192 "zone_append": false, 00:36:58.192 "compare": false, 00:36:58.192 "compare_and_write": false, 00:36:58.192 "abort": true, 00:36:58.192 "seek_hole": false, 00:36:58.192 "seek_data": false, 00:36:58.192 "copy": true, 00:36:58.192 "nvme_iov_md": false 00:36:58.192 }, 00:36:58.192 "memory_domains": [ 00:36:58.192 { 00:36:58.192 "dma_device_id": "system", 00:36:58.192 "dma_device_type": 1 00:36:58.192 }, 00:36:58.192 { 00:36:58.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.193 "dma_device_type": 2 00:36:58.193 } 00:36:58.193 ], 00:36:58.193 "driver_specific": {} 00:36:58.193 } 00:36:58.193 ] 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:58.193 "name": "Existed_Raid", 00:36:58.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.193 "strip_size_kb": 64, 00:36:58.193 "state": "configuring", 00:36:58.193 "raid_level": "raid5f", 00:36:58.193 "superblock": false, 00:36:58.193 "num_base_bdevs": 4, 00:36:58.193 "num_base_bdevs_discovered": 3, 00:36:58.193 "num_base_bdevs_operational": 4, 00:36:58.193 "base_bdevs_list": [ 00:36:58.193 { 00:36:58.193 "name": "BaseBdev1", 00:36:58.193 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:58.193 "is_configured": true, 00:36:58.193 "data_offset": 0, 00:36:58.193 "data_size": 65536 00:36:58.193 }, 00:36:58.193 { 00:36:58.193 "name": "BaseBdev2", 00:36:58.193 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:58.193 "is_configured": true, 00:36:58.193 "data_offset": 0, 00:36:58.193 "data_size": 65536 00:36:58.193 }, 00:36:58.193 { 
00:36:58.193 "name": "BaseBdev3", 00:36:58.193 "uuid": "ece14768-c267-4263-9d2f-3a93ccafc1c2", 00:36:58.193 "is_configured": true, 00:36:58.193 "data_offset": 0, 00:36:58.193 "data_size": 65536 00:36:58.193 }, 00:36:58.193 { 00:36:58.193 "name": "BaseBdev4", 00:36:58.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:58.193 "is_configured": false, 00:36:58.193 "data_offset": 0, 00:36:58.193 "data_size": 0 00:36:58.193 } 00:36:58.193 ] 00:36:58.193 }' 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:58.193 14:06:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.760 [2024-10-09 14:06:05.147732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:58.760 [2024-10-09 14:06:05.147794] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:36:58.760 [2024-10-09 14:06:05.147804] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:36:58.760 [2024-10-09 14:06:05.148093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:58.760 [2024-10-09 14:06:05.148588] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:36:58.760 [2024-10-09 14:06:05.148610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:36:58.760 [2024-10-09 14:06:05.148811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:58.760 BaseBdev4 00:36:58.760 14:06:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.760 [ 00:36:58.760 { 00:36:58.760 "name": "BaseBdev4", 00:36:58.760 "aliases": [ 00:36:58.760 "c56103a1-a94a-49b3-a4fd-009230d0d89f" 00:36:58.760 ], 00:36:58.760 "product_name": "Malloc disk", 00:36:58.760 "block_size": 512, 00:36:58.760 "num_blocks": 65536, 00:36:58.760 "uuid": "c56103a1-a94a-49b3-a4fd-009230d0d89f", 00:36:58.760 "assigned_rate_limits": { 00:36:58.760 "rw_ios_per_sec": 0, 00:36:58.760 
"rw_mbytes_per_sec": 0, 00:36:58.760 "r_mbytes_per_sec": 0, 00:36:58.760 "w_mbytes_per_sec": 0 00:36:58.760 }, 00:36:58.760 "claimed": true, 00:36:58.760 "claim_type": "exclusive_write", 00:36:58.760 "zoned": false, 00:36:58.760 "supported_io_types": { 00:36:58.760 "read": true, 00:36:58.760 "write": true, 00:36:58.760 "unmap": true, 00:36:58.760 "flush": true, 00:36:58.760 "reset": true, 00:36:58.760 "nvme_admin": false, 00:36:58.760 "nvme_io": false, 00:36:58.760 "nvme_io_md": false, 00:36:58.760 "write_zeroes": true, 00:36:58.760 "zcopy": true, 00:36:58.760 "get_zone_info": false, 00:36:58.760 "zone_management": false, 00:36:58.760 "zone_append": false, 00:36:58.760 "compare": false, 00:36:58.760 "compare_and_write": false, 00:36:58.760 "abort": true, 00:36:58.760 "seek_hole": false, 00:36:58.760 "seek_data": false, 00:36:58.760 "copy": true, 00:36:58.760 "nvme_iov_md": false 00:36:58.760 }, 00:36:58.760 "memory_domains": [ 00:36:58.760 { 00:36:58.760 "dma_device_id": "system", 00:36:58.760 "dma_device_type": 1 00:36:58.760 }, 00:36:58.760 { 00:36:58.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.760 "dma_device_type": 2 00:36:58.760 } 00:36:58.760 ], 00:36:58.760 "driver_specific": {} 00:36:58.760 } 00:36:58.760 ] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:58.760 14:06:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:58.760 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:58.760 "name": "Existed_Raid", 00:36:58.760 "uuid": "7b50df3b-d724-4cea-b95a-9eeaa78df54d", 00:36:58.760 "strip_size_kb": 64, 00:36:58.760 "state": "online", 00:36:58.760 "raid_level": "raid5f", 00:36:58.760 "superblock": false, 00:36:58.760 "num_base_bdevs": 4, 00:36:58.760 "num_base_bdevs_discovered": 4, 00:36:58.760 "num_base_bdevs_operational": 4, 00:36:58.760 "base_bdevs_list": [ 00:36:58.760 { 00:36:58.760 "name": 
"BaseBdev1", 00:36:58.760 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:58.760 "is_configured": true, 00:36:58.760 "data_offset": 0, 00:36:58.760 "data_size": 65536 00:36:58.760 }, 00:36:58.760 { 00:36:58.760 "name": "BaseBdev2", 00:36:58.760 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:58.760 "is_configured": true, 00:36:58.760 "data_offset": 0, 00:36:58.760 "data_size": 65536 00:36:58.760 }, 00:36:58.760 { 00:36:58.761 "name": "BaseBdev3", 00:36:58.761 "uuid": "ece14768-c267-4263-9d2f-3a93ccafc1c2", 00:36:58.761 "is_configured": true, 00:36:58.761 "data_offset": 0, 00:36:58.761 "data_size": 65536 00:36:58.761 }, 00:36:58.761 { 00:36:58.761 "name": "BaseBdev4", 00:36:58.761 "uuid": "c56103a1-a94a-49b3-a4fd-009230d0d89f", 00:36:58.761 "is_configured": true, 00:36:58.761 "data_offset": 0, 00:36:58.761 "data_size": 65536 00:36:58.761 } 00:36:58.761 ] 00:36:58.761 }' 00:36:58.761 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:58.761 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.328 [2024-10-09 14:06:05.644120] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:59.328 "name": "Existed_Raid", 00:36:59.328 "aliases": [ 00:36:59.328 "7b50df3b-d724-4cea-b95a-9eeaa78df54d" 00:36:59.328 ], 00:36:59.328 "product_name": "Raid Volume", 00:36:59.328 "block_size": 512, 00:36:59.328 "num_blocks": 196608, 00:36:59.328 "uuid": "7b50df3b-d724-4cea-b95a-9eeaa78df54d", 00:36:59.328 "assigned_rate_limits": { 00:36:59.328 "rw_ios_per_sec": 0, 00:36:59.328 "rw_mbytes_per_sec": 0, 00:36:59.328 "r_mbytes_per_sec": 0, 00:36:59.328 "w_mbytes_per_sec": 0 00:36:59.328 }, 00:36:59.328 "claimed": false, 00:36:59.328 "zoned": false, 00:36:59.328 "supported_io_types": { 00:36:59.328 "read": true, 00:36:59.328 "write": true, 00:36:59.328 "unmap": false, 00:36:59.328 "flush": false, 00:36:59.328 "reset": true, 00:36:59.328 "nvme_admin": false, 00:36:59.328 "nvme_io": false, 00:36:59.328 "nvme_io_md": false, 00:36:59.328 "write_zeroes": true, 00:36:59.328 "zcopy": false, 00:36:59.328 "get_zone_info": false, 00:36:59.328 "zone_management": false, 00:36:59.328 "zone_append": false, 00:36:59.328 "compare": false, 00:36:59.328 "compare_and_write": false, 00:36:59.328 "abort": false, 00:36:59.328 "seek_hole": false, 00:36:59.328 "seek_data": false, 00:36:59.328 "copy": false, 00:36:59.328 "nvme_iov_md": false 00:36:59.328 }, 00:36:59.328 "driver_specific": { 00:36:59.328 "raid": { 00:36:59.328 "uuid": "7b50df3b-d724-4cea-b95a-9eeaa78df54d", 00:36:59.328 "strip_size_kb": 64, 
00:36:59.328 "state": "online", 00:36:59.328 "raid_level": "raid5f", 00:36:59.328 "superblock": false, 00:36:59.328 "num_base_bdevs": 4, 00:36:59.328 "num_base_bdevs_discovered": 4, 00:36:59.328 "num_base_bdevs_operational": 4, 00:36:59.328 "base_bdevs_list": [ 00:36:59.328 { 00:36:59.328 "name": "BaseBdev1", 00:36:59.328 "uuid": "1cdc6dc5-a1cd-4080-82a5-10229ad79aa6", 00:36:59.328 "is_configured": true, 00:36:59.328 "data_offset": 0, 00:36:59.328 "data_size": 65536 00:36:59.328 }, 00:36:59.328 { 00:36:59.328 "name": "BaseBdev2", 00:36:59.328 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:59.328 "is_configured": true, 00:36:59.328 "data_offset": 0, 00:36:59.328 "data_size": 65536 00:36:59.328 }, 00:36:59.328 { 00:36:59.328 "name": "BaseBdev3", 00:36:59.328 "uuid": "ece14768-c267-4263-9d2f-3a93ccafc1c2", 00:36:59.328 "is_configured": true, 00:36:59.328 "data_offset": 0, 00:36:59.328 "data_size": 65536 00:36:59.328 }, 00:36:59.328 { 00:36:59.328 "name": "BaseBdev4", 00:36:59.328 "uuid": "c56103a1-a94a-49b3-a4fd-009230d0d89f", 00:36:59.328 "is_configured": true, 00:36:59.328 "data_offset": 0, 00:36:59.328 "data_size": 65536 00:36:59.328 } 00:36:59.328 ] 00:36:59.328 } 00:36:59.328 } 00:36:59.328 }' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:59.328 BaseBdev2 00:36:59.328 BaseBdev3 00:36:59.328 BaseBdev4' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:59.328 14:06:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:59.328 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.329 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:36:59.592 [2024-10-09 14:06:05.963984] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.592 14:06:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:36:59.592 14:06:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:59.592 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.592 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:59.592 "name": "Existed_Raid", 00:36:59.592 "uuid": "7b50df3b-d724-4cea-b95a-9eeaa78df54d", 00:36:59.592 "strip_size_kb": 64, 00:36:59.592 "state": "online", 00:36:59.592 "raid_level": "raid5f", 00:36:59.592 "superblock": false, 00:36:59.592 "num_base_bdevs": 4, 00:36:59.592 "num_base_bdevs_discovered": 3, 00:36:59.592 "num_base_bdevs_operational": 3, 00:36:59.592 "base_bdevs_list": [ 00:36:59.592 { 00:36:59.592 "name": null, 00:36:59.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:59.592 "is_configured": false, 00:36:59.592 "data_offset": 0, 00:36:59.592 "data_size": 65536 00:36:59.592 }, 00:36:59.592 { 00:36:59.592 "name": "BaseBdev2", 00:36:59.592 "uuid": "66226c4a-366a-4e48-a6cd-8fefabb83ec8", 00:36:59.592 "is_configured": true, 00:36:59.592 "data_offset": 0, 00:36:59.592 "data_size": 65536 00:36:59.592 }, 00:36:59.592 { 00:36:59.592 "name": "BaseBdev3", 00:36:59.592 "uuid": "ece14768-c267-4263-9d2f-3a93ccafc1c2", 00:36:59.592 "is_configured": true, 00:36:59.592 "data_offset": 0, 00:36:59.592 "data_size": 65536 00:36:59.592 }, 00:36:59.592 { 00:36:59.592 "name": "BaseBdev4", 00:36:59.592 "uuid": "c56103a1-a94a-49b3-a4fd-009230d0d89f", 00:36:59.592 "is_configured": true, 00:36:59.592 "data_offset": 0, 00:36:59.592 "data_size": 65536 00:36:59.592 } 00:36:59.592 ] 00:36:59.592 }' 00:36:59.592 
14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:59.592 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 [2024-10-09 14:06:06.488042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:00.159 [2024-10-09 14:06:06.488139] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:00.159 [2024-10-09 14:06:06.499944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 [2024-10-09 14:06:06.556003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 [2024-10-09 14:06:06.623796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:00.159 [2024-10-09 14:06:06.623838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
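The trace above walks `bdev_raid.sh`'s delete-and-verify loop (lines @270–@276): for each base bdev after the first, it confirms the raid bdev is still reported by `bdev_raid_get_bdevs`, then deletes the base bdev with `bdev_malloc_delete`. A minimal runnable sketch of that loop follows; the `rpc_cmd` stub and its canned JSON are assumptions for illustration (the real helper proxies SPDK's `rpc.py` against a live target), and `sed` stands in for the suite's `jq -r '.[0]["name"]'` so the sketch needs no extra tools.

```shell
#!/usr/bin/env bash
# Sketch of the delete-and-verify loop seen in the trace.
# ASSUMPTION: rpc_cmd is stubbed; the real one talks to a running SPDK target.
set -euo pipefail

rpc_cmd() {
  case "$1" in
    # Pretend the raid bdev survives base-bdev deletion (state goes offline).
    bdev_raid_get_bdevs) echo '[{"name": "Existed_Raid", "state": "offline"}]' ;;
    bdev_malloc_delete)  echo "deleted $2" >/dev/null ;;
    *) return 0 ;;
  esac
}

num_base_bdevs=4
for (( i = 1; i < num_base_bdevs; i++ )); do
  # The raid bdev must still be visible while its base bdevs disappear.
  raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all | sed -n 's/.*"name": "\([^"]*\)".*/\1/p')
  [ "$raid_bdev" = "Existed_Raid" ] || exit 1
  rpc_cmd bdev_malloc_delete "BaseBdev$((i + 1))"
done
echo "all base bdevs removed; raid bdev still reported"
```

Deleting the base bdevs in order (BaseBdev2..BaseBdev4) while re-querying the raid bdev each iteration is what drives the `online` → `offline` transition logged by `raid_bdev_deconfigure` above.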
00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.159 BaseBdev2 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.159 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 [ 00:37:00.418 { 00:37:00.418 "name": "BaseBdev2", 00:37:00.418 "aliases": [ 00:37:00.418 "f2817130-3359-44b2-946c-9894958d71a2" 00:37:00.418 ], 00:37:00.418 "product_name": "Malloc disk", 00:37:00.418 "block_size": 512, 00:37:00.418 "num_blocks": 65536, 00:37:00.418 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:00.418 "assigned_rate_limits": { 00:37:00.418 "rw_ios_per_sec": 0, 00:37:00.418 "rw_mbytes_per_sec": 0, 00:37:00.418 "r_mbytes_per_sec": 0, 00:37:00.418 "w_mbytes_per_sec": 0 00:37:00.418 }, 00:37:00.418 "claimed": false, 00:37:00.418 "zoned": false, 00:37:00.418 "supported_io_types": { 00:37:00.418 "read": true, 00:37:00.418 "write": true, 00:37:00.418 "unmap": true, 00:37:00.418 "flush": true, 00:37:00.418 "reset": true, 00:37:00.418 "nvme_admin": false, 00:37:00.418 "nvme_io": false, 00:37:00.418 "nvme_io_md": false, 00:37:00.418 "write_zeroes": true, 00:37:00.418 "zcopy": true, 00:37:00.418 "get_zone_info": false, 00:37:00.418 "zone_management": false, 00:37:00.418 "zone_append": false, 00:37:00.418 "compare": false, 00:37:00.418 "compare_and_write": false, 00:37:00.418 "abort": true, 00:37:00.418 "seek_hole": false, 00:37:00.418 "seek_data": false, 00:37:00.418 "copy": true, 00:37:00.418 "nvme_iov_md": false 00:37:00.418 }, 00:37:00.418 "memory_domains": [ 00:37:00.418 { 00:37:00.418 "dma_device_id": "system", 00:37:00.418 
"dma_device_type": 1 00:37:00.418 }, 00:37:00.418 { 00:37:00.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.418 "dma_device_type": 2 00:37:00.418 } 00:37:00.418 ], 00:37:00.418 "driver_specific": {} 00:37:00.418 } 00:37:00.418 ] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 BaseBdev3 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:00.418 14:06:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 [ 00:37:00.418 { 00:37:00.418 "name": "BaseBdev3", 00:37:00.418 "aliases": [ 00:37:00.418 "48e7c072-7d74-496b-88bb-0e07ab3a2fca" 00:37:00.418 ], 00:37:00.418 "product_name": "Malloc disk", 00:37:00.418 "block_size": 512, 00:37:00.418 "num_blocks": 65536, 00:37:00.418 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:00.418 "assigned_rate_limits": { 00:37:00.418 "rw_ios_per_sec": 0, 00:37:00.418 "rw_mbytes_per_sec": 0, 00:37:00.418 "r_mbytes_per_sec": 0, 00:37:00.418 "w_mbytes_per_sec": 0 00:37:00.418 }, 00:37:00.418 "claimed": false, 00:37:00.418 "zoned": false, 00:37:00.418 "supported_io_types": { 00:37:00.418 "read": true, 00:37:00.418 "write": true, 00:37:00.418 "unmap": true, 00:37:00.418 "flush": true, 00:37:00.418 "reset": true, 00:37:00.418 "nvme_admin": false, 00:37:00.418 "nvme_io": false, 00:37:00.418 "nvme_io_md": false, 00:37:00.418 "write_zeroes": true, 00:37:00.418 "zcopy": true, 00:37:00.418 "get_zone_info": false, 00:37:00.418 "zone_management": false, 00:37:00.418 "zone_append": false, 00:37:00.418 "compare": false, 00:37:00.418 "compare_and_write": false, 00:37:00.418 "abort": true, 00:37:00.418 "seek_hole": false, 00:37:00.418 "seek_data": false, 00:37:00.418 "copy": true, 00:37:00.418 "nvme_iov_md": false 00:37:00.418 }, 00:37:00.418 "memory_domains": [ 00:37:00.418 { 00:37:00.418 
"dma_device_id": "system", 00:37:00.418 "dma_device_type": 1 00:37:00.418 }, 00:37:00.418 { 00:37:00.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.418 "dma_device_type": 2 00:37:00.418 } 00:37:00.418 ], 00:37:00.418 "driver_specific": {} 00:37:00.418 } 00:37:00.418 ] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.418 BaseBdev4 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:37:00.418 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.419 [ 00:37:00.419 { 00:37:00.419 "name": "BaseBdev4", 00:37:00.419 "aliases": [ 00:37:00.419 "88c201b1-4dac-4eca-8484-fb84a3dc86f3" 00:37:00.419 ], 00:37:00.419 "product_name": "Malloc disk", 00:37:00.419 "block_size": 512, 00:37:00.419 "num_blocks": 65536, 00:37:00.419 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:00.419 "assigned_rate_limits": { 00:37:00.419 "rw_ios_per_sec": 0, 00:37:00.419 "rw_mbytes_per_sec": 0, 00:37:00.419 "r_mbytes_per_sec": 0, 00:37:00.419 "w_mbytes_per_sec": 0 00:37:00.419 }, 00:37:00.419 "claimed": false, 00:37:00.419 "zoned": false, 00:37:00.419 "supported_io_types": { 00:37:00.419 "read": true, 00:37:00.419 "write": true, 00:37:00.419 "unmap": true, 00:37:00.419 "flush": true, 00:37:00.419 "reset": true, 00:37:00.419 "nvme_admin": false, 00:37:00.419 "nvme_io": false, 00:37:00.419 "nvme_io_md": false, 00:37:00.419 "write_zeroes": true, 00:37:00.419 "zcopy": true, 00:37:00.419 "get_zone_info": false, 00:37:00.419 "zone_management": false, 00:37:00.419 "zone_append": false, 00:37:00.419 "compare": false, 00:37:00.419 "compare_and_write": false, 00:37:00.419 "abort": true, 00:37:00.419 "seek_hole": false, 00:37:00.419 "seek_data": false, 00:37:00.419 "copy": true, 00:37:00.419 "nvme_iov_md": false 00:37:00.419 }, 00:37:00.419 "memory_domains": [ 
00:37:00.419 { 00:37:00.419 "dma_device_id": "system", 00:37:00.419 "dma_device_type": 1 00:37:00.419 }, 00:37:00.419 { 00:37:00.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:00.419 "dma_device_type": 2 00:37:00.419 } 00:37:00.419 ], 00:37:00.419 "driver_specific": {} 00:37:00.419 } 00:37:00.419 ] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.419 [2024-10-09 14:06:06.806022] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:00.419 [2024-10-09 14:06:06.806075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:00.419 [2024-10-09 14:06:06.806097] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:00.419 [2024-10-09 14:06:06.808272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:00.419 [2024-10-09 14:06:06.808324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.419 "name": "Existed_Raid", 00:37:00.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.419 "strip_size_kb": 64, 00:37:00.419 "state": "configuring", 00:37:00.419 "raid_level": "raid5f", 00:37:00.419 
"superblock": false, 00:37:00.419 "num_base_bdevs": 4, 00:37:00.419 "num_base_bdevs_discovered": 3, 00:37:00.419 "num_base_bdevs_operational": 4, 00:37:00.419 "base_bdevs_list": [ 00:37:00.419 { 00:37:00.419 "name": "BaseBdev1", 00:37:00.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.419 "is_configured": false, 00:37:00.419 "data_offset": 0, 00:37:00.419 "data_size": 0 00:37:00.419 }, 00:37:00.419 { 00:37:00.419 "name": "BaseBdev2", 00:37:00.419 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:00.419 "is_configured": true, 00:37:00.419 "data_offset": 0, 00:37:00.419 "data_size": 65536 00:37:00.419 }, 00:37:00.419 { 00:37:00.419 "name": "BaseBdev3", 00:37:00.419 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:00.419 "is_configured": true, 00:37:00.419 "data_offset": 0, 00:37:00.419 "data_size": 65536 00:37:00.419 }, 00:37:00.419 { 00:37:00.419 "name": "BaseBdev4", 00:37:00.419 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:00.419 "is_configured": true, 00:37:00.419 "data_offset": 0, 00:37:00.419 "data_size": 65536 00:37:00.419 } 00:37:00.419 ] 00:37:00.419 }' 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.419 14:06:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.986 [2024-10-09 14:06:07.270117] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.986 "name": "Existed_Raid", 00:37:00.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.986 "strip_size_kb": 64, 00:37:00.986 "state": "configuring", 00:37:00.986 "raid_level": "raid5f", 00:37:00.986 "superblock": false, 
00:37:00.986 "num_base_bdevs": 4, 00:37:00.986 "num_base_bdevs_discovered": 2, 00:37:00.986 "num_base_bdevs_operational": 4, 00:37:00.986 "base_bdevs_list": [ 00:37:00.986 { 00:37:00.986 "name": "BaseBdev1", 00:37:00.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.986 "is_configured": false, 00:37:00.986 "data_offset": 0, 00:37:00.986 "data_size": 0 00:37:00.986 }, 00:37:00.986 { 00:37:00.986 "name": null, 00:37:00.986 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:00.986 "is_configured": false, 00:37:00.986 "data_offset": 0, 00:37:00.986 "data_size": 65536 00:37:00.986 }, 00:37:00.986 { 00:37:00.986 "name": "BaseBdev3", 00:37:00.986 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:00.986 "is_configured": true, 00:37:00.986 "data_offset": 0, 00:37:00.986 "data_size": 65536 00:37:00.986 }, 00:37:00.986 { 00:37:00.986 "name": "BaseBdev4", 00:37:00.986 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:00.986 "is_configured": true, 00:37:00.986 "data_offset": 0, 00:37:00.986 "data_size": 65536 00:37:00.986 } 00:37:00.986 ] 00:37:00.986 }' 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.986 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:01.245 
14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 [2024-10-09 14:06:07.781163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:01.245 BaseBdev1 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:01.245 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.245 
14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.503 [ 00:37:01.503 { 00:37:01.503 "name": "BaseBdev1", 00:37:01.503 "aliases": [ 00:37:01.503 "b3493c60-9b80-4e18-963d-967d718ca0dc" 00:37:01.503 ], 00:37:01.503 "product_name": "Malloc disk", 00:37:01.503 "block_size": 512, 00:37:01.503 "num_blocks": 65536, 00:37:01.503 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:01.503 "assigned_rate_limits": { 00:37:01.503 "rw_ios_per_sec": 0, 00:37:01.503 "rw_mbytes_per_sec": 0, 00:37:01.503 "r_mbytes_per_sec": 0, 00:37:01.503 "w_mbytes_per_sec": 0 00:37:01.503 }, 00:37:01.503 "claimed": true, 00:37:01.503 "claim_type": "exclusive_write", 00:37:01.503 "zoned": false, 00:37:01.503 "supported_io_types": { 00:37:01.503 "read": true, 00:37:01.503 "write": true, 00:37:01.503 "unmap": true, 00:37:01.503 "flush": true, 00:37:01.503 "reset": true, 00:37:01.503 "nvme_admin": false, 00:37:01.503 "nvme_io": false, 00:37:01.503 "nvme_io_md": false, 00:37:01.503 "write_zeroes": true, 00:37:01.503 "zcopy": true, 00:37:01.503 "get_zone_info": false, 00:37:01.503 "zone_management": false, 00:37:01.503 "zone_append": false, 00:37:01.503 "compare": false, 00:37:01.503 "compare_and_write": false, 00:37:01.503 "abort": true, 00:37:01.503 "seek_hole": false, 00:37:01.503 "seek_data": false, 00:37:01.503 "copy": true, 00:37:01.503 "nvme_iov_md": false 00:37:01.503 }, 00:37:01.503 "memory_domains": [ 00:37:01.503 { 00:37:01.503 "dma_device_id": "system", 00:37:01.503 "dma_device_type": 1 00:37:01.503 }, 00:37:01.503 { 00:37:01.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:01.503 "dma_device_type": 2 00:37:01.503 } 00:37:01.503 ], 00:37:01.503 "driver_specific": {} 00:37:01.503 } 00:37:01.503 ] 00:37:01.503 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.503 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:37:01.503 14:06:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:01.504 "name": "Existed_Raid", 00:37:01.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.504 "strip_size_kb": 64, 00:37:01.504 "state": 
"configuring", 00:37:01.504 "raid_level": "raid5f", 00:37:01.504 "superblock": false, 00:37:01.504 "num_base_bdevs": 4, 00:37:01.504 "num_base_bdevs_discovered": 3, 00:37:01.504 "num_base_bdevs_operational": 4, 00:37:01.504 "base_bdevs_list": [ 00:37:01.504 { 00:37:01.504 "name": "BaseBdev1", 00:37:01.504 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:01.504 "is_configured": true, 00:37:01.504 "data_offset": 0, 00:37:01.504 "data_size": 65536 00:37:01.504 }, 00:37:01.504 { 00:37:01.504 "name": null, 00:37:01.504 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:01.504 "is_configured": false, 00:37:01.504 "data_offset": 0, 00:37:01.504 "data_size": 65536 00:37:01.504 }, 00:37:01.504 { 00:37:01.504 "name": "BaseBdev3", 00:37:01.504 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:01.504 "is_configured": true, 00:37:01.504 "data_offset": 0, 00:37:01.504 "data_size": 65536 00:37:01.504 }, 00:37:01.504 { 00:37:01.504 "name": "BaseBdev4", 00:37:01.504 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:01.504 "is_configured": true, 00:37:01.504 "data_offset": 0, 00:37:01.504 "data_size": 65536 00:37:01.504 } 00:37:01.504 ] 00:37:01.504 }' 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:01.504 14:06:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.764 14:06:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.764 [2024-10-09 14:06:08.297320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.764 14:06:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:01.764 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.023 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.023 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:02.023 "name": "Existed_Raid", 00:37:02.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.023 "strip_size_kb": 64, 00:37:02.023 "state": "configuring", 00:37:02.023 "raid_level": "raid5f", 00:37:02.023 "superblock": false, 00:37:02.023 "num_base_bdevs": 4, 00:37:02.023 "num_base_bdevs_discovered": 2, 00:37:02.023 "num_base_bdevs_operational": 4, 00:37:02.023 "base_bdevs_list": [ 00:37:02.023 { 00:37:02.023 "name": "BaseBdev1", 00:37:02.023 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:02.023 "is_configured": true, 00:37:02.023 "data_offset": 0, 00:37:02.023 "data_size": 65536 00:37:02.023 }, 00:37:02.023 { 00:37:02.023 "name": null, 00:37:02.023 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:02.023 "is_configured": false, 00:37:02.023 "data_offset": 0, 00:37:02.023 "data_size": 65536 00:37:02.023 }, 00:37:02.023 { 00:37:02.023 "name": null, 00:37:02.023 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:02.023 "is_configured": false, 00:37:02.023 "data_offset": 0, 00:37:02.023 "data_size": 65536 00:37:02.023 }, 00:37:02.023 { 00:37:02.023 "name": "BaseBdev4", 00:37:02.023 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:02.023 "is_configured": true, 00:37:02.023 "data_offset": 0, 00:37:02.023 "data_size": 65536 00:37:02.023 } 00:37:02.023 ] 00:37:02.023 }' 00:37:02.023 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:02.023 14:06:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.281 [2024-10-09 14:06:08.809479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:02.281 
14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.281 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.539 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.539 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:02.539 "name": "Existed_Raid", 00:37:02.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.539 "strip_size_kb": 64, 00:37:02.539 "state": "configuring", 00:37:02.539 "raid_level": "raid5f", 00:37:02.539 "superblock": false, 00:37:02.539 "num_base_bdevs": 4, 00:37:02.539 "num_base_bdevs_discovered": 3, 00:37:02.539 "num_base_bdevs_operational": 4, 00:37:02.539 "base_bdevs_list": [ 00:37:02.539 { 00:37:02.539 "name": "BaseBdev1", 00:37:02.539 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:02.539 "is_configured": true, 00:37:02.539 "data_offset": 0, 00:37:02.539 "data_size": 65536 00:37:02.539 }, 00:37:02.539 { 00:37:02.540 "name": null, 00:37:02.540 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:02.540 "is_configured": 
false, 00:37:02.540 "data_offset": 0, 00:37:02.540 "data_size": 65536 00:37:02.540 }, 00:37:02.540 { 00:37:02.540 "name": "BaseBdev3", 00:37:02.540 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:02.540 "is_configured": true, 00:37:02.540 "data_offset": 0, 00:37:02.540 "data_size": 65536 00:37:02.540 }, 00:37:02.540 { 00:37:02.540 "name": "BaseBdev4", 00:37:02.540 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:02.540 "is_configured": true, 00:37:02.540 "data_offset": 0, 00:37:02.540 "data_size": 65536 00:37:02.540 } 00:37:02.540 ] 00:37:02.540 }' 00:37:02.540 14:06:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:02.540 14:06:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.798 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.798 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.798 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.798 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:02.798 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:02.799 [2024-10-09 14:06:09.321599] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:02.799 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.057 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.057 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:03.057 "name": "Existed_Raid", 00:37:03.057 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:03.057 "strip_size_kb": 64, 00:37:03.057 "state": "configuring", 00:37:03.057 "raid_level": "raid5f", 00:37:03.057 "superblock": false, 00:37:03.057 "num_base_bdevs": 4, 00:37:03.057 "num_base_bdevs_discovered": 2, 00:37:03.057 "num_base_bdevs_operational": 4, 00:37:03.057 "base_bdevs_list": [ 00:37:03.057 { 00:37:03.057 "name": null, 00:37:03.057 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:03.057 "is_configured": false, 00:37:03.057 "data_offset": 0, 00:37:03.057 "data_size": 65536 00:37:03.057 }, 00:37:03.057 { 00:37:03.058 "name": null, 00:37:03.058 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:03.058 "is_configured": false, 00:37:03.058 "data_offset": 0, 00:37:03.058 "data_size": 65536 00:37:03.058 }, 00:37:03.058 { 00:37:03.058 "name": "BaseBdev3", 00:37:03.058 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:03.058 "is_configured": true, 00:37:03.058 "data_offset": 0, 00:37:03.058 "data_size": 65536 00:37:03.058 }, 00:37:03.058 { 00:37:03.058 "name": "BaseBdev4", 00:37:03.058 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:03.058 "is_configured": true, 00:37:03.058 "data_offset": 0, 00:37:03.058 "data_size": 65536 00:37:03.058 } 00:37:03.058 ] 00:37:03.058 }' 00:37:03.058 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:03.058 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.316 [2024-10-09 14:06:09.836120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:03.316 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.317 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.576 14:06:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:03.576 "name": "Existed_Raid", 00:37:03.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:03.576 "strip_size_kb": 64, 00:37:03.576 "state": "configuring", 00:37:03.576 "raid_level": "raid5f", 00:37:03.576 "superblock": false, 00:37:03.576 "num_base_bdevs": 4, 00:37:03.576 "num_base_bdevs_discovered": 3, 00:37:03.576 "num_base_bdevs_operational": 4, 00:37:03.576 "base_bdevs_list": [ 00:37:03.576 { 00:37:03.576 "name": null, 00:37:03.576 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:03.576 "is_configured": false, 00:37:03.576 "data_offset": 0, 00:37:03.576 "data_size": 65536 00:37:03.576 }, 00:37:03.576 { 00:37:03.576 "name": "BaseBdev2", 00:37:03.576 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:03.576 "is_configured": true, 00:37:03.576 "data_offset": 0, 00:37:03.576 "data_size": 65536 00:37:03.576 }, 00:37:03.576 { 00:37:03.576 "name": "BaseBdev3", 00:37:03.576 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:03.576 "is_configured": true, 00:37:03.576 "data_offset": 0, 00:37:03.576 "data_size": 65536 00:37:03.576 }, 00:37:03.576 { 00:37:03.576 "name": "BaseBdev4", 00:37:03.576 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:03.576 "is_configured": true, 00:37:03.576 "data_offset": 0, 00:37:03.576 "data_size": 65536 00:37:03.576 } 00:37:03.576 ] 00:37:03.576 }' 00:37:03.576 14:06:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:03.576 14:06:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b3493c60-9b80-4e18-963d-967d718ca0dc 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:03.835 [2024-10-09 14:06:10.379201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:03.835 [2024-10-09 
14:06:10.379255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:37:03.835 [2024-10-09 14:06:10.379265] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:03.835 [2024-10-09 14:06:10.379528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:03.835 [2024-10-09 14:06:10.379975] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:37:03.835 [2024-10-09 14:06:10.380000] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:37:03.835 [2024-10-09 14:06:10.380163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:03.835 NewBaseBdev 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.835 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.093 14:06:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.093 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:04.093 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.093 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.093 [ 00:37:04.093 { 00:37:04.093 "name": "NewBaseBdev", 00:37:04.093 "aliases": [ 00:37:04.093 "b3493c60-9b80-4e18-963d-967d718ca0dc" 00:37:04.093 ], 00:37:04.093 "product_name": "Malloc disk", 00:37:04.093 "block_size": 512, 00:37:04.093 "num_blocks": 65536, 00:37:04.093 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:04.094 "assigned_rate_limits": { 00:37:04.094 "rw_ios_per_sec": 0, 00:37:04.094 "rw_mbytes_per_sec": 0, 00:37:04.094 "r_mbytes_per_sec": 0, 00:37:04.094 "w_mbytes_per_sec": 0 00:37:04.094 }, 00:37:04.094 "claimed": true, 00:37:04.094 "claim_type": "exclusive_write", 00:37:04.094 "zoned": false, 00:37:04.094 "supported_io_types": { 00:37:04.094 "read": true, 00:37:04.094 "write": true, 00:37:04.094 "unmap": true, 00:37:04.094 "flush": true, 00:37:04.094 "reset": true, 00:37:04.094 "nvme_admin": false, 00:37:04.094 "nvme_io": false, 00:37:04.094 "nvme_io_md": false, 00:37:04.094 "write_zeroes": true, 00:37:04.094 "zcopy": true, 00:37:04.094 "get_zone_info": false, 00:37:04.094 "zone_management": false, 00:37:04.094 "zone_append": false, 00:37:04.094 "compare": false, 00:37:04.094 "compare_and_write": false, 00:37:04.094 "abort": true, 00:37:04.094 "seek_hole": false, 00:37:04.094 "seek_data": false, 00:37:04.094 "copy": true, 00:37:04.094 "nvme_iov_md": false 00:37:04.094 }, 00:37:04.094 "memory_domains": [ 00:37:04.094 { 00:37:04.094 "dma_device_id": "system", 00:37:04.094 "dma_device_type": 1 00:37:04.094 }, 00:37:04.094 { 00:37:04.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.094 "dma_device_type": 2 00:37:04.094 } 
00:37:04.094 ], 00:37:04.094 "driver_specific": {} 00:37:04.094 } 00:37:04.094 ] 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:04.094 "name": "Existed_Raid", 00:37:04.094 "uuid": "f2e5ae09-0211-420f-a341-4d7dfb3d9500", 00:37:04.094 "strip_size_kb": 64, 00:37:04.094 "state": "online", 00:37:04.094 "raid_level": "raid5f", 00:37:04.094 "superblock": false, 00:37:04.094 "num_base_bdevs": 4, 00:37:04.094 "num_base_bdevs_discovered": 4, 00:37:04.094 "num_base_bdevs_operational": 4, 00:37:04.094 "base_bdevs_list": [ 00:37:04.094 { 00:37:04.094 "name": "NewBaseBdev", 00:37:04.094 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:04.094 "is_configured": true, 00:37:04.094 "data_offset": 0, 00:37:04.094 "data_size": 65536 00:37:04.094 }, 00:37:04.094 { 00:37:04.094 "name": "BaseBdev2", 00:37:04.094 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:04.094 "is_configured": true, 00:37:04.094 "data_offset": 0, 00:37:04.094 "data_size": 65536 00:37:04.094 }, 00:37:04.094 { 00:37:04.094 "name": "BaseBdev3", 00:37:04.094 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:04.094 "is_configured": true, 00:37:04.094 "data_offset": 0, 00:37:04.094 "data_size": 65536 00:37:04.094 }, 00:37:04.094 { 00:37:04.094 "name": "BaseBdev4", 00:37:04.094 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:04.094 "is_configured": true, 00:37:04.094 "data_offset": 0, 00:37:04.094 "data_size": 65536 00:37:04.094 } 00:37:04.094 ] 00:37:04.094 }' 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:04.094 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.352 [2024-10-09 14:06:10.875537] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:04.352 14:06:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.612 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:04.612 "name": "Existed_Raid", 00:37:04.612 "aliases": [ 00:37:04.612 "f2e5ae09-0211-420f-a341-4d7dfb3d9500" 00:37:04.612 ], 00:37:04.612 "product_name": "Raid Volume", 00:37:04.612 "block_size": 512, 00:37:04.612 "num_blocks": 196608, 00:37:04.612 "uuid": "f2e5ae09-0211-420f-a341-4d7dfb3d9500", 00:37:04.612 "assigned_rate_limits": { 00:37:04.612 "rw_ios_per_sec": 0, 00:37:04.612 "rw_mbytes_per_sec": 0, 00:37:04.612 "r_mbytes_per_sec": 0, 00:37:04.612 "w_mbytes_per_sec": 0 00:37:04.612 }, 00:37:04.612 "claimed": false, 00:37:04.612 "zoned": false, 00:37:04.612 "supported_io_types": { 00:37:04.612 "read": true, 00:37:04.612 "write": true, 00:37:04.612 "unmap": false, 00:37:04.612 "flush": false, 00:37:04.612 "reset": true, 00:37:04.612 "nvme_admin": false, 00:37:04.612 "nvme_io": false, 00:37:04.612 "nvme_io_md": 
false, 00:37:04.612 "write_zeroes": true, 00:37:04.612 "zcopy": false, 00:37:04.612 "get_zone_info": false, 00:37:04.612 "zone_management": false, 00:37:04.612 "zone_append": false, 00:37:04.612 "compare": false, 00:37:04.612 "compare_and_write": false, 00:37:04.612 "abort": false, 00:37:04.612 "seek_hole": false, 00:37:04.612 "seek_data": false, 00:37:04.612 "copy": false, 00:37:04.612 "nvme_iov_md": false 00:37:04.612 }, 00:37:04.612 "driver_specific": { 00:37:04.612 "raid": { 00:37:04.612 "uuid": "f2e5ae09-0211-420f-a341-4d7dfb3d9500", 00:37:04.612 "strip_size_kb": 64, 00:37:04.612 "state": "online", 00:37:04.612 "raid_level": "raid5f", 00:37:04.612 "superblock": false, 00:37:04.612 "num_base_bdevs": 4, 00:37:04.612 "num_base_bdevs_discovered": 4, 00:37:04.612 "num_base_bdevs_operational": 4, 00:37:04.612 "base_bdevs_list": [ 00:37:04.612 { 00:37:04.612 "name": "NewBaseBdev", 00:37:04.612 "uuid": "b3493c60-9b80-4e18-963d-967d718ca0dc", 00:37:04.612 "is_configured": true, 00:37:04.612 "data_offset": 0, 00:37:04.612 "data_size": 65536 00:37:04.612 }, 00:37:04.612 { 00:37:04.612 "name": "BaseBdev2", 00:37:04.612 "uuid": "f2817130-3359-44b2-946c-9894958d71a2", 00:37:04.612 "is_configured": true, 00:37:04.612 "data_offset": 0, 00:37:04.612 "data_size": 65536 00:37:04.612 }, 00:37:04.612 { 00:37:04.612 "name": "BaseBdev3", 00:37:04.612 "uuid": "48e7c072-7d74-496b-88bb-0e07ab3a2fca", 00:37:04.612 "is_configured": true, 00:37:04.612 "data_offset": 0, 00:37:04.612 "data_size": 65536 00:37:04.612 }, 00:37:04.612 { 00:37:04.612 "name": "BaseBdev4", 00:37:04.612 "uuid": "88c201b1-4dac-4eca-8484-fb84a3dc86f3", 00:37:04.612 "is_configured": true, 00:37:04.612 "data_offset": 0, 00:37:04.612 "data_size": 65536 00:37:04.612 } 00:37:04.612 ] 00:37:04.612 } 00:37:04.612 } 00:37:04.612 }' 00:37:04.612 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:04.612 14:06:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:04.612 BaseBdev2 00:37:04.612 BaseBdev3 00:37:04.612 BaseBdev4' 00:37:04.612 14:06:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.612 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:04.871 14:06:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:04.871 [2024-10-09 14:06:11.195410] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:04.871 [2024-10-09 14:06:11.195444] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:04.871 [2024-10-09 14:06:11.195512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:04.871 [2024-10-09 14:06:11.195780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:04.871 [2024-10-09 14:06:11.195799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.871 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93669 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 93669 ']' 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93669 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93669 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:04.872 killing process with pid 93669 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93669' 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93669 00:37:04.872 [2024-10-09 14:06:11.235947] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:04.872 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93669 00:37:04.872 [2024-10-09 14:06:11.276347] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:37:05.132 00:37:05.132 real 0m9.848s 00:37:05.132 user 0m17.035s 00:37:05.132 sys 0m2.153s 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:37:05.132 ************************************ 00:37:05.132 END TEST raid5f_state_function_test 00:37:05.132 ************************************ 00:37:05.132 14:06:11 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:37:05.132 14:06:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:37:05.132 14:06:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:05.132 14:06:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:05.132 ************************************ 00:37:05.132 START TEST 
raid5f_state_function_test_sb 00:37:05.132 ************************************ 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:37:05.132 
14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94323 00:37:05.132 Process raid pid: 94323 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94323' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94323 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94323 ']' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:05.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:05.132 14:06:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:05.391 [2024-10-09 14:06:11.708143] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:05.391 [2024-10-09 14:06:11.708329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:05.391 [2024-10-09 14:06:11.889226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.391 [2024-10-09 14:06:11.934226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.650 [2024-10-09 14:06:11.977564] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:05.650 [2024-10-09 14:06:11.977600] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.294 [2024-10-09 14:06:12.660312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:06.294 [2024-10-09 14:06:12.660365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:06.294 [2024-10-09 14:06:12.660379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:06.294 [2024-10-09 14:06:12.660393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:06.294 [2024-10-09 14:06:12.660401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:37:06.294 [2024-10-09 14:06:12.660418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:06.294 [2024-10-09 14:06:12.660426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:06.294 [2024-10-09 14:06:12.660438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:06.294 "name": "Existed_Raid", 00:37:06.294 "uuid": "8a101589-a74e-4a5e-89a7-24caefc734d2", 00:37:06.294 "strip_size_kb": 64, 00:37:06.294 "state": "configuring", 00:37:06.294 "raid_level": "raid5f", 00:37:06.294 "superblock": true, 00:37:06.294 "num_base_bdevs": 4, 00:37:06.294 "num_base_bdevs_discovered": 0, 00:37:06.294 "num_base_bdevs_operational": 4, 00:37:06.294 "base_bdevs_list": [ 00:37:06.294 { 00:37:06.294 "name": "BaseBdev1", 00:37:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.294 "is_configured": false, 00:37:06.294 "data_offset": 0, 00:37:06.294 "data_size": 0 00:37:06.294 }, 00:37:06.294 { 00:37:06.294 "name": "BaseBdev2", 00:37:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.294 "is_configured": false, 00:37:06.294 "data_offset": 0, 00:37:06.294 "data_size": 0 00:37:06.294 }, 00:37:06.294 { 00:37:06.294 "name": "BaseBdev3", 00:37:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.294 "is_configured": false, 00:37:06.294 "data_offset": 0, 00:37:06.294 "data_size": 0 00:37:06.294 }, 00:37:06.294 { 00:37:06.294 "name": "BaseBdev4", 00:37:06.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.294 "is_configured": false, 00:37:06.294 "data_offset": 0, 00:37:06.294 "data_size": 0 00:37:06.294 } 00:37:06.294 ] 00:37:06.294 }' 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.294 14:06:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.861 [2024-10-09 14:06:13.120282] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:06.861 [2024-10-09 14:06:13.120329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.861 [2024-10-09 14:06:13.132337] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:06.861 [2024-10-09 14:06:13.132380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:06.861 [2024-10-09 14:06:13.132390] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:06.861 [2024-10-09 14:06:13.132403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:06.861 [2024-10-09 14:06:13.132411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:06.861 [2024-10-09 14:06:13.132424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:06.861 [2024-10-09 14:06:13.132431] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:06.861 [2024-10-09 14:06:13.132444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.861 [2024-10-09 14:06:13.149614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:06.861 BaseBdev1 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:37:06.861 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.862 [ 00:37:06.862 { 00:37:06.862 "name": "BaseBdev1", 00:37:06.862 "aliases": [ 00:37:06.862 "dd318a84-a358-440c-b46c-ead85a96ffb4" 00:37:06.862 ], 00:37:06.862 "product_name": "Malloc disk", 00:37:06.862 "block_size": 512, 00:37:06.862 "num_blocks": 65536, 00:37:06.862 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:06.862 "assigned_rate_limits": { 00:37:06.862 "rw_ios_per_sec": 0, 00:37:06.862 "rw_mbytes_per_sec": 0, 00:37:06.862 "r_mbytes_per_sec": 0, 00:37:06.862 "w_mbytes_per_sec": 0 00:37:06.862 }, 00:37:06.862 "claimed": true, 00:37:06.862 "claim_type": "exclusive_write", 00:37:06.862 "zoned": false, 00:37:06.862 "supported_io_types": { 00:37:06.862 "read": true, 00:37:06.862 "write": true, 00:37:06.862 "unmap": true, 00:37:06.862 "flush": true, 00:37:06.862 "reset": true, 00:37:06.862 "nvme_admin": false, 00:37:06.862 "nvme_io": false, 00:37:06.862 "nvme_io_md": false, 00:37:06.862 "write_zeroes": true, 00:37:06.862 "zcopy": true, 00:37:06.862 "get_zone_info": false, 00:37:06.862 "zone_management": false, 00:37:06.862 "zone_append": false, 00:37:06.862 "compare": false, 00:37:06.862 "compare_and_write": false, 00:37:06.862 "abort": true, 00:37:06.862 "seek_hole": false, 00:37:06.862 "seek_data": false, 00:37:06.862 "copy": true, 00:37:06.862 "nvme_iov_md": false 00:37:06.862 }, 00:37:06.862 "memory_domains": [ 00:37:06.862 { 00:37:06.862 "dma_device_id": "system", 00:37:06.862 "dma_device_type": 1 00:37:06.862 }, 00:37:06.862 { 00:37:06.862 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:37:06.862 "dma_device_type": 2 00:37:06.862 } 00:37:06.862 ], 00:37:06.862 "driver_specific": {} 00:37:06.862 } 00:37:06.862 ] 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:06.862 14:06:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:06.862 "name": "Existed_Raid", 00:37:06.862 "uuid": "98d7c269-729e-4221-800a-6d762f2d6527", 00:37:06.862 "strip_size_kb": 64, 00:37:06.862 "state": "configuring", 00:37:06.862 "raid_level": "raid5f", 00:37:06.862 "superblock": true, 00:37:06.862 "num_base_bdevs": 4, 00:37:06.862 "num_base_bdevs_discovered": 1, 00:37:06.862 "num_base_bdevs_operational": 4, 00:37:06.862 "base_bdevs_list": [ 00:37:06.862 { 00:37:06.862 "name": "BaseBdev1", 00:37:06.862 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:06.862 "is_configured": true, 00:37:06.862 "data_offset": 2048, 00:37:06.862 "data_size": 63488 00:37:06.862 }, 00:37:06.862 { 00:37:06.862 "name": "BaseBdev2", 00:37:06.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.862 "is_configured": false, 00:37:06.862 "data_offset": 0, 00:37:06.862 "data_size": 0 00:37:06.862 }, 00:37:06.862 { 00:37:06.862 "name": "BaseBdev3", 00:37:06.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.862 "is_configured": false, 00:37:06.862 "data_offset": 0, 00:37:06.862 "data_size": 0 00:37:06.862 }, 00:37:06.862 { 00:37:06.862 "name": "BaseBdev4", 00:37:06.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.862 "is_configured": false, 00:37:06.862 "data_offset": 0, 00:37:06.862 "data_size": 0 00:37:06.862 } 00:37:06.862 ] 00:37:06.862 }' 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:06.862 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:07.121 14:06:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.121 [2024-10-09 14:06:13.653786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:07.121 [2024-10-09 14:06:13.653940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.121 [2024-10-09 14:06:13.665846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:07.121 [2024-10-09 14:06:13.668179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:07.121 [2024-10-09 14:06:13.668319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:07.121 [2024-10-09 14:06:13.668421] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:37:07.121 [2024-10-09 14:06:13.668468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:37:07.121 [2024-10-09 14:06:13.668612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:37:07.121 [2024-10-09 14:06:13.668656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:07.121 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.381 14:06:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.381 "name": "Existed_Raid", 00:37:07.381 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:07.381 "strip_size_kb": 64, 00:37:07.381 "state": "configuring", 00:37:07.381 "raid_level": "raid5f", 00:37:07.381 "superblock": true, 00:37:07.381 "num_base_bdevs": 4, 00:37:07.381 "num_base_bdevs_discovered": 1, 00:37:07.381 "num_base_bdevs_operational": 4, 00:37:07.381 "base_bdevs_list": [ 00:37:07.381 { 00:37:07.381 "name": "BaseBdev1", 00:37:07.381 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:07.381 "is_configured": true, 00:37:07.381 "data_offset": 2048, 00:37:07.381 "data_size": 63488 00:37:07.381 }, 00:37:07.381 { 00:37:07.381 "name": "BaseBdev2", 00:37:07.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.381 "is_configured": false, 00:37:07.381 "data_offset": 0, 00:37:07.381 "data_size": 0 00:37:07.381 }, 00:37:07.381 { 00:37:07.381 "name": "BaseBdev3", 00:37:07.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.381 "is_configured": false, 00:37:07.381 "data_offset": 0, 00:37:07.381 "data_size": 0 00:37:07.381 }, 00:37:07.381 { 00:37:07.381 "name": "BaseBdev4", 00:37:07.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.381 "is_configured": false, 00:37:07.381 "data_offset": 0, 00:37:07.381 "data_size": 0 00:37:07.381 } 00:37:07.381 ] 00:37:07.381 }' 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.381 14:06:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.639 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:07.639 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:07.639 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.640 [2024-10-09 14:06:14.152003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:07.640 BaseBdev2 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.640 [ 00:37:07.640 { 00:37:07.640 "name": "BaseBdev2", 00:37:07.640 "aliases": [ 00:37:07.640 
"675098bf-3f9e-4475-ab3a-eb39cdc6963c" 00:37:07.640 ], 00:37:07.640 "product_name": "Malloc disk", 00:37:07.640 "block_size": 512, 00:37:07.640 "num_blocks": 65536, 00:37:07.640 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:07.640 "assigned_rate_limits": { 00:37:07.640 "rw_ios_per_sec": 0, 00:37:07.640 "rw_mbytes_per_sec": 0, 00:37:07.640 "r_mbytes_per_sec": 0, 00:37:07.640 "w_mbytes_per_sec": 0 00:37:07.640 }, 00:37:07.640 "claimed": true, 00:37:07.640 "claim_type": "exclusive_write", 00:37:07.640 "zoned": false, 00:37:07.640 "supported_io_types": { 00:37:07.640 "read": true, 00:37:07.640 "write": true, 00:37:07.640 "unmap": true, 00:37:07.640 "flush": true, 00:37:07.640 "reset": true, 00:37:07.640 "nvme_admin": false, 00:37:07.640 "nvme_io": false, 00:37:07.640 "nvme_io_md": false, 00:37:07.640 "write_zeroes": true, 00:37:07.640 "zcopy": true, 00:37:07.640 "get_zone_info": false, 00:37:07.640 "zone_management": false, 00:37:07.640 "zone_append": false, 00:37:07.640 "compare": false, 00:37:07.640 "compare_and_write": false, 00:37:07.640 "abort": true, 00:37:07.640 "seek_hole": false, 00:37:07.640 "seek_data": false, 00:37:07.640 "copy": true, 00:37:07.640 "nvme_iov_md": false 00:37:07.640 }, 00:37:07.640 "memory_domains": [ 00:37:07.640 { 00:37:07.640 "dma_device_id": "system", 00:37:07.640 "dma_device_type": 1 00:37:07.640 }, 00:37:07.640 { 00:37:07.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:07.640 "dma_device_type": 2 00:37:07.640 } 00:37:07.640 ], 00:37:07.640 "driver_specific": {} 00:37:07.640 } 00:37:07.640 ] 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.640 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.899 "name": "Existed_Raid", 00:37:07.899 "uuid": 
"c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:07.899 "strip_size_kb": 64, 00:37:07.899 "state": "configuring", 00:37:07.899 "raid_level": "raid5f", 00:37:07.899 "superblock": true, 00:37:07.899 "num_base_bdevs": 4, 00:37:07.899 "num_base_bdevs_discovered": 2, 00:37:07.899 "num_base_bdevs_operational": 4, 00:37:07.899 "base_bdevs_list": [ 00:37:07.899 { 00:37:07.899 "name": "BaseBdev1", 00:37:07.899 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:07.899 "is_configured": true, 00:37:07.899 "data_offset": 2048, 00:37:07.899 "data_size": 63488 00:37:07.899 }, 00:37:07.899 { 00:37:07.899 "name": "BaseBdev2", 00:37:07.899 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:07.899 "is_configured": true, 00:37:07.899 "data_offset": 2048, 00:37:07.899 "data_size": 63488 00:37:07.899 }, 00:37:07.899 { 00:37:07.899 "name": "BaseBdev3", 00:37:07.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.899 "is_configured": false, 00:37:07.899 "data_offset": 0, 00:37:07.899 "data_size": 0 00:37:07.899 }, 00:37:07.899 { 00:37:07.899 "name": "BaseBdev4", 00:37:07.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:07.899 "is_configured": false, 00:37:07.899 "data_offset": 0, 00:37:07.899 "data_size": 0 00:37:07.899 } 00:37:07.899 ] 00:37:07.899 }' 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.899 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.158 [2024-10-09 14:06:14.635156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:08.158 BaseBdev3 
00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.158 [ 00:37:08.158 { 00:37:08.158 "name": "BaseBdev3", 00:37:08.158 "aliases": [ 00:37:08.158 "85f263db-fb3a-43e6-90c4-d122e5576f4c" 00:37:08.158 ], 00:37:08.158 "product_name": "Malloc disk", 00:37:08.158 "block_size": 512, 00:37:08.158 "num_blocks": 65536, 00:37:08.158 "uuid": "85f263db-fb3a-43e6-90c4-d122e5576f4c", 00:37:08.158 
"assigned_rate_limits": { 00:37:08.158 "rw_ios_per_sec": 0, 00:37:08.158 "rw_mbytes_per_sec": 0, 00:37:08.158 "r_mbytes_per_sec": 0, 00:37:08.158 "w_mbytes_per_sec": 0 00:37:08.158 }, 00:37:08.158 "claimed": true, 00:37:08.158 "claim_type": "exclusive_write", 00:37:08.158 "zoned": false, 00:37:08.158 "supported_io_types": { 00:37:08.158 "read": true, 00:37:08.158 "write": true, 00:37:08.158 "unmap": true, 00:37:08.158 "flush": true, 00:37:08.158 "reset": true, 00:37:08.158 "nvme_admin": false, 00:37:08.158 "nvme_io": false, 00:37:08.158 "nvme_io_md": false, 00:37:08.158 "write_zeroes": true, 00:37:08.158 "zcopy": true, 00:37:08.158 "get_zone_info": false, 00:37:08.158 "zone_management": false, 00:37:08.158 "zone_append": false, 00:37:08.158 "compare": false, 00:37:08.158 "compare_and_write": false, 00:37:08.158 "abort": true, 00:37:08.158 "seek_hole": false, 00:37:08.158 "seek_data": false, 00:37:08.158 "copy": true, 00:37:08.158 "nvme_iov_md": false 00:37:08.158 }, 00:37:08.158 "memory_domains": [ 00:37:08.158 { 00:37:08.158 "dma_device_id": "system", 00:37:08.158 "dma_device_type": 1 00:37:08.158 }, 00:37:08.158 { 00:37:08.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.158 "dma_device_type": 2 00:37:08.158 } 00:37:08.158 ], 00:37:08.158 "driver_specific": {} 00:37:08.158 } 00:37:08.158 ] 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:08.158 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.416 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:08.416 "name": "Existed_Raid", 00:37:08.416 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:08.416 "strip_size_kb": 64, 00:37:08.416 "state": "configuring", 00:37:08.416 "raid_level": "raid5f", 00:37:08.416 "superblock": true, 00:37:08.416 "num_base_bdevs": 4, 00:37:08.416 "num_base_bdevs_discovered": 3, 
00:37:08.416 "num_base_bdevs_operational": 4, 00:37:08.416 "base_bdevs_list": [ 00:37:08.416 { 00:37:08.416 "name": "BaseBdev1", 00:37:08.416 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:08.416 "is_configured": true, 00:37:08.416 "data_offset": 2048, 00:37:08.416 "data_size": 63488 00:37:08.416 }, 00:37:08.416 { 00:37:08.416 "name": "BaseBdev2", 00:37:08.416 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:08.416 "is_configured": true, 00:37:08.416 "data_offset": 2048, 00:37:08.416 "data_size": 63488 00:37:08.416 }, 00:37:08.416 { 00:37:08.416 "name": "BaseBdev3", 00:37:08.416 "uuid": "85f263db-fb3a-43e6-90c4-d122e5576f4c", 00:37:08.416 "is_configured": true, 00:37:08.416 "data_offset": 2048, 00:37:08.416 "data_size": 63488 00:37:08.416 }, 00:37:08.416 { 00:37:08.416 "name": "BaseBdev4", 00:37:08.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.416 "is_configured": false, 00:37:08.416 "data_offset": 0, 00:37:08.416 "data_size": 0 00:37:08.416 } 00:37:08.416 ] 00:37:08.416 }' 00:37:08.416 14:06:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:08.416 14:06:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.675 [2024-10-09 14:06:15.142294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:08.675 [2024-10-09 14:06:15.142698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:37:08.675 [2024-10-09 14:06:15.142826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:08.675 BaseBdev4 
00:37:08.675 [2024-10-09 14:06:15.143171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.675 [2024-10-09 14:06:15.143762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:37:08.675 [2024-10-09 14:06:15.143786] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:37:08.675 [2024-10-09 14:06:15.143909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:08.675 14:06:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.675 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.675 [ 00:37:08.675 { 00:37:08.675 "name": "BaseBdev4", 00:37:08.675 "aliases": [ 00:37:08.675 "2cb20361-a17e-44ad-9188-c14452762091" 00:37:08.675 ], 00:37:08.675 "product_name": "Malloc disk", 00:37:08.675 "block_size": 512, 00:37:08.675 "num_blocks": 65536, 00:37:08.675 "uuid": "2cb20361-a17e-44ad-9188-c14452762091", 00:37:08.675 "assigned_rate_limits": { 00:37:08.675 "rw_ios_per_sec": 0, 00:37:08.675 "rw_mbytes_per_sec": 0, 00:37:08.675 "r_mbytes_per_sec": 0, 00:37:08.675 "w_mbytes_per_sec": 0 00:37:08.675 }, 00:37:08.675 "claimed": true, 00:37:08.675 "claim_type": "exclusive_write", 00:37:08.675 "zoned": false, 00:37:08.675 "supported_io_types": { 00:37:08.675 "read": true, 00:37:08.675 "write": true, 00:37:08.675 "unmap": true, 00:37:08.675 "flush": true, 00:37:08.675 "reset": true, 00:37:08.675 "nvme_admin": false, 00:37:08.675 "nvme_io": false, 00:37:08.675 "nvme_io_md": false, 00:37:08.675 "write_zeroes": true, 00:37:08.675 "zcopy": true, 00:37:08.675 "get_zone_info": false, 00:37:08.675 "zone_management": false, 00:37:08.675 "zone_append": false, 00:37:08.675 "compare": false, 00:37:08.675 "compare_and_write": false, 00:37:08.675 "abort": true, 00:37:08.675 "seek_hole": false, 00:37:08.675 "seek_data": false, 00:37:08.675 "copy": true, 00:37:08.675 "nvme_iov_md": false 00:37:08.675 }, 00:37:08.675 "memory_domains": [ 00:37:08.675 { 00:37:08.675 "dma_device_id": "system", 00:37:08.675 "dma_device_type": 1 00:37:08.676 }, 00:37:08.676 { 00:37:08.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:08.676 "dma_device_type": 2 00:37:08.676 } 00:37:08.676 ], 00:37:08.676 "driver_specific": {} 00:37:08.676 } 00:37:08.676 ] 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.676 14:06:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:37:08.676 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:08.934 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:08.934 "name": "Existed_Raid", 00:37:08.934 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:08.934 "strip_size_kb": 64, 00:37:08.934 "state": "online", 00:37:08.934 "raid_level": "raid5f", 00:37:08.934 "superblock": true, 00:37:08.934 "num_base_bdevs": 4, 00:37:08.934 "num_base_bdevs_discovered": 4, 00:37:08.934 "num_base_bdevs_operational": 4, 00:37:08.934 "base_bdevs_list": [ 00:37:08.934 { 00:37:08.934 "name": "BaseBdev1", 00:37:08.934 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:08.934 "is_configured": true, 00:37:08.934 "data_offset": 2048, 00:37:08.934 "data_size": 63488 00:37:08.934 }, 00:37:08.934 { 00:37:08.934 "name": "BaseBdev2", 00:37:08.934 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:08.934 "is_configured": true, 00:37:08.934 "data_offset": 2048, 00:37:08.934 "data_size": 63488 00:37:08.934 }, 00:37:08.934 { 00:37:08.934 "name": "BaseBdev3", 00:37:08.934 "uuid": "85f263db-fb3a-43e6-90c4-d122e5576f4c", 00:37:08.934 "is_configured": true, 00:37:08.934 "data_offset": 2048, 00:37:08.934 "data_size": 63488 00:37:08.934 }, 00:37:08.934 { 00:37:08.934 "name": "BaseBdev4", 00:37:08.934 "uuid": "2cb20361-a17e-44ad-9188-c14452762091", 00:37:08.934 "is_configured": true, 00:37:08.934 "data_offset": 2048, 00:37:08.934 "data_size": 63488 00:37:08.934 } 00:37:08.934 ] 00:37:08.934 }' 00:37:08.934 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:08.934 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:09.193 [2024-10-09 14:06:15.626719] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:09.193 "name": "Existed_Raid", 00:37:09.193 "aliases": [ 00:37:09.193 "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2" 00:37:09.193 ], 00:37:09.193 "product_name": "Raid Volume", 00:37:09.193 "block_size": 512, 00:37:09.193 "num_blocks": 190464, 00:37:09.193 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:09.193 "assigned_rate_limits": { 00:37:09.193 "rw_ios_per_sec": 0, 00:37:09.193 "rw_mbytes_per_sec": 0, 00:37:09.193 "r_mbytes_per_sec": 0, 00:37:09.193 "w_mbytes_per_sec": 0 00:37:09.193 }, 00:37:09.193 "claimed": false, 00:37:09.193 "zoned": false, 00:37:09.193 "supported_io_types": { 00:37:09.193 "read": true, 00:37:09.193 "write": true, 00:37:09.193 "unmap": false, 00:37:09.193 "flush": false, 
00:37:09.193 "reset": true, 00:37:09.193 "nvme_admin": false, 00:37:09.193 "nvme_io": false, 00:37:09.193 "nvme_io_md": false, 00:37:09.193 "write_zeroes": true, 00:37:09.193 "zcopy": false, 00:37:09.193 "get_zone_info": false, 00:37:09.193 "zone_management": false, 00:37:09.193 "zone_append": false, 00:37:09.193 "compare": false, 00:37:09.193 "compare_and_write": false, 00:37:09.193 "abort": false, 00:37:09.193 "seek_hole": false, 00:37:09.193 "seek_data": false, 00:37:09.193 "copy": false, 00:37:09.193 "nvme_iov_md": false 00:37:09.193 }, 00:37:09.193 "driver_specific": { 00:37:09.193 "raid": { 00:37:09.193 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:09.193 "strip_size_kb": 64, 00:37:09.193 "state": "online", 00:37:09.193 "raid_level": "raid5f", 00:37:09.193 "superblock": true, 00:37:09.193 "num_base_bdevs": 4, 00:37:09.193 "num_base_bdevs_discovered": 4, 00:37:09.193 "num_base_bdevs_operational": 4, 00:37:09.193 "base_bdevs_list": [ 00:37:09.193 { 00:37:09.193 "name": "BaseBdev1", 00:37:09.193 "uuid": "dd318a84-a358-440c-b46c-ead85a96ffb4", 00:37:09.193 "is_configured": true, 00:37:09.193 "data_offset": 2048, 00:37:09.193 "data_size": 63488 00:37:09.193 }, 00:37:09.193 { 00:37:09.193 "name": "BaseBdev2", 00:37:09.193 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:09.193 "is_configured": true, 00:37:09.193 "data_offset": 2048, 00:37:09.193 "data_size": 63488 00:37:09.193 }, 00:37:09.193 { 00:37:09.193 "name": "BaseBdev3", 00:37:09.193 "uuid": "85f263db-fb3a-43e6-90c4-d122e5576f4c", 00:37:09.193 "is_configured": true, 00:37:09.193 "data_offset": 2048, 00:37:09.193 "data_size": 63488 00:37:09.193 }, 00:37:09.193 { 00:37:09.193 "name": "BaseBdev4", 00:37:09.193 "uuid": "2cb20361-a17e-44ad-9188-c14452762091", 00:37:09.193 "is_configured": true, 00:37:09.193 "data_offset": 2048, 00:37:09.193 "data_size": 63488 00:37:09.193 } 00:37:09.193 ] 00:37:09.193 } 00:37:09.193 } 00:37:09.193 }' 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:09.193 BaseBdev2 00:37:09.193 BaseBdev3 00:37:09.193 BaseBdev4' 00:37:09.193 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:09.452 14:06:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:09.452 14:06:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 [2024-10-09 14:06:15.946521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.452 14:06:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.710 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:09.710 "name": "Existed_Raid", 00:37:09.710 "uuid": "c62b4fc2-278b-4bc5-a8dc-527eaf0bf5b2", 00:37:09.710 "strip_size_kb": 64, 00:37:09.710 "state": "online", 00:37:09.710 "raid_level": "raid5f", 00:37:09.710 "superblock": true, 00:37:09.710 "num_base_bdevs": 4, 00:37:09.710 "num_base_bdevs_discovered": 3, 00:37:09.710 "num_base_bdevs_operational": 3, 00:37:09.710 "base_bdevs_list": [ 00:37:09.710 { 00:37:09.710 "name": 
null, 00:37:09.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:09.710 "is_configured": false, 00:37:09.710 "data_offset": 0, 00:37:09.710 "data_size": 63488 00:37:09.710 }, 00:37:09.710 { 00:37:09.710 "name": "BaseBdev2", 00:37:09.710 "uuid": "675098bf-3f9e-4475-ab3a-eb39cdc6963c", 00:37:09.710 "is_configured": true, 00:37:09.710 "data_offset": 2048, 00:37:09.710 "data_size": 63488 00:37:09.710 }, 00:37:09.710 { 00:37:09.710 "name": "BaseBdev3", 00:37:09.710 "uuid": "85f263db-fb3a-43e6-90c4-d122e5576f4c", 00:37:09.710 "is_configured": true, 00:37:09.710 "data_offset": 2048, 00:37:09.710 "data_size": 63488 00:37:09.710 }, 00:37:09.710 { 00:37:09.710 "name": "BaseBdev4", 00:37:09.710 "uuid": "2cb20361-a17e-44ad-9188-c14452762091", 00:37:09.710 "is_configured": true, 00:37:09.710 "data_offset": 2048, 00:37:09.710 "data_size": 63488 00:37:09.710 } 00:37:09.710 ] 00:37:09.710 }' 00:37:09.710 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:09.710 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.969 [2024-10-09 14:06:16.466743] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:09.969 [2024-10-09 14:06:16.467006] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:09.969 [2024-10-09 14:06:16.478874] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:09.969 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 [2024-10-09 14:06:16.530894] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 [2024-10-09 
14:06:16.594726] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:37:10.229 [2024-10-09 14:06:16.594869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 BaseBdev2 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 [ 00:37:10.229 { 00:37:10.229 "name": "BaseBdev2", 00:37:10.229 "aliases": [ 00:37:10.229 "2bcf9331-1e21-479f-9893-e977124b73b8" 00:37:10.229 ], 00:37:10.229 "product_name": "Malloc disk", 00:37:10.229 "block_size": 512, 00:37:10.229 
"num_blocks": 65536, 00:37:10.229 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:10.229 "assigned_rate_limits": { 00:37:10.229 "rw_ios_per_sec": 0, 00:37:10.229 "rw_mbytes_per_sec": 0, 00:37:10.229 "r_mbytes_per_sec": 0, 00:37:10.229 "w_mbytes_per_sec": 0 00:37:10.229 }, 00:37:10.229 "claimed": false, 00:37:10.229 "zoned": false, 00:37:10.229 "supported_io_types": { 00:37:10.229 "read": true, 00:37:10.229 "write": true, 00:37:10.229 "unmap": true, 00:37:10.229 "flush": true, 00:37:10.229 "reset": true, 00:37:10.229 "nvme_admin": false, 00:37:10.229 "nvme_io": false, 00:37:10.229 "nvme_io_md": false, 00:37:10.229 "write_zeroes": true, 00:37:10.229 "zcopy": true, 00:37:10.229 "get_zone_info": false, 00:37:10.229 "zone_management": false, 00:37:10.229 "zone_append": false, 00:37:10.229 "compare": false, 00:37:10.229 "compare_and_write": false, 00:37:10.229 "abort": true, 00:37:10.229 "seek_hole": false, 00:37:10.229 "seek_data": false, 00:37:10.229 "copy": true, 00:37:10.229 "nvme_iov_md": false 00:37:10.229 }, 00:37:10.229 "memory_domains": [ 00:37:10.229 { 00:37:10.229 "dma_device_id": "system", 00:37:10.229 "dma_device_type": 1 00:37:10.229 }, 00:37:10.229 { 00:37:10.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.229 "dma_device_type": 2 00:37:10.229 } 00:37:10.229 ], 00:37:10.229 "driver_specific": {} 00:37:10.229 } 00:37:10.229 ] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:37:10.229 14:06:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.229 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.229 BaseBdev3 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.230 [ 00:37:10.230 { 00:37:10.230 "name": "BaseBdev3", 00:37:10.230 "aliases": [ 00:37:10.230 
"9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7" 00:37:10.230 ], 00:37:10.230 "product_name": "Malloc disk", 00:37:10.230 "block_size": 512, 00:37:10.230 "num_blocks": 65536, 00:37:10.230 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:10.230 "assigned_rate_limits": { 00:37:10.230 "rw_ios_per_sec": 0, 00:37:10.230 "rw_mbytes_per_sec": 0, 00:37:10.230 "r_mbytes_per_sec": 0, 00:37:10.230 "w_mbytes_per_sec": 0 00:37:10.230 }, 00:37:10.230 "claimed": false, 00:37:10.230 "zoned": false, 00:37:10.230 "supported_io_types": { 00:37:10.230 "read": true, 00:37:10.230 "write": true, 00:37:10.230 "unmap": true, 00:37:10.230 "flush": true, 00:37:10.230 "reset": true, 00:37:10.230 "nvme_admin": false, 00:37:10.230 "nvme_io": false, 00:37:10.230 "nvme_io_md": false, 00:37:10.230 "write_zeroes": true, 00:37:10.230 "zcopy": true, 00:37:10.230 "get_zone_info": false, 00:37:10.230 "zone_management": false, 00:37:10.230 "zone_append": false, 00:37:10.230 "compare": false, 00:37:10.230 "compare_and_write": false, 00:37:10.230 "abort": true, 00:37:10.230 "seek_hole": false, 00:37:10.230 "seek_data": false, 00:37:10.230 "copy": true, 00:37:10.230 "nvme_iov_md": false 00:37:10.230 }, 00:37:10.230 "memory_domains": [ 00:37:10.230 { 00:37:10.230 "dma_device_id": "system", 00:37:10.230 "dma_device_type": 1 00:37:10.230 }, 00:37:10.230 { 00:37:10.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.230 "dma_device_type": 2 00:37:10.230 } 00:37:10.230 ], 00:37:10.230 "driver_specific": {} 00:37:10.230 } 00:37:10.230 ] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:10.230 14:06:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.230 BaseBdev4 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.230 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:37:10.489 [ 00:37:10.489 { 00:37:10.489 "name": "BaseBdev4", 00:37:10.489 "aliases": [ 00:37:10.489 "01d1b67d-ae1a-4d7c-a318-7a20db6f5676" 00:37:10.489 ], 00:37:10.489 "product_name": "Malloc disk", 00:37:10.489 "block_size": 512, 00:37:10.489 "num_blocks": 65536, 00:37:10.489 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:10.489 "assigned_rate_limits": { 00:37:10.489 "rw_ios_per_sec": 0, 00:37:10.489 "rw_mbytes_per_sec": 0, 00:37:10.489 "r_mbytes_per_sec": 0, 00:37:10.489 "w_mbytes_per_sec": 0 00:37:10.489 }, 00:37:10.489 "claimed": false, 00:37:10.489 "zoned": false, 00:37:10.489 "supported_io_types": { 00:37:10.489 "read": true, 00:37:10.489 "write": true, 00:37:10.489 "unmap": true, 00:37:10.489 "flush": true, 00:37:10.489 "reset": true, 00:37:10.489 "nvme_admin": false, 00:37:10.489 "nvme_io": false, 00:37:10.489 "nvme_io_md": false, 00:37:10.489 "write_zeroes": true, 00:37:10.489 "zcopy": true, 00:37:10.489 "get_zone_info": false, 00:37:10.489 "zone_management": false, 00:37:10.489 "zone_append": false, 00:37:10.489 "compare": false, 00:37:10.489 "compare_and_write": false, 00:37:10.489 "abort": true, 00:37:10.489 "seek_hole": false, 00:37:10.489 "seek_data": false, 00:37:10.489 "copy": true, 00:37:10.489 "nvme_iov_md": false 00:37:10.489 }, 00:37:10.489 "memory_domains": [ 00:37:10.489 { 00:37:10.489 "dma_device_id": "system", 00:37:10.489 "dma_device_type": 1 00:37:10.489 }, 00:37:10.489 { 00:37:10.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:10.489 "dma_device_type": 2 00:37:10.489 } 00:37:10.489 ], 00:37:10.489 "driver_specific": {} 00:37:10.489 } 00:37:10.489 ] 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:37:10.489 14:06:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.489 [2024-10-09 14:06:16.797150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:10.489 [2024-10-09 14:06:16.797197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:10.489 [2024-10-09 14:06:16.797219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:10.489 [2024-10-09 14:06:16.799502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:10.489 [2024-10-09 14:06:16.799567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:10.489 "name": "Existed_Raid", 00:37:10.489 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:10.489 "strip_size_kb": 64, 00:37:10.489 "state": "configuring", 00:37:10.489 "raid_level": "raid5f", 00:37:10.489 "superblock": true, 00:37:10.489 "num_base_bdevs": 4, 00:37:10.489 "num_base_bdevs_discovered": 3, 00:37:10.489 "num_base_bdevs_operational": 4, 00:37:10.489 "base_bdevs_list": [ 00:37:10.489 { 00:37:10.489 "name": "BaseBdev1", 00:37:10.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.489 "is_configured": false, 00:37:10.489 "data_offset": 0, 00:37:10.489 "data_size": 0 00:37:10.489 }, 00:37:10.489 { 00:37:10.489 "name": "BaseBdev2", 00:37:10.489 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:10.489 "is_configured": true, 00:37:10.489 "data_offset": 2048, 00:37:10.489 
"data_size": 63488 00:37:10.489 }, 00:37:10.489 { 00:37:10.489 "name": "BaseBdev3", 00:37:10.489 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:10.489 "is_configured": true, 00:37:10.489 "data_offset": 2048, 00:37:10.489 "data_size": 63488 00:37:10.489 }, 00:37:10.489 { 00:37:10.489 "name": "BaseBdev4", 00:37:10.489 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:10.489 "is_configured": true, 00:37:10.489 "data_offset": 2048, 00:37:10.489 "data_size": 63488 00:37:10.489 } 00:37:10.489 ] 00:37:10.489 }' 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:10.489 14:06:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.748 [2024-10-09 14:06:17.237231] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:10.748 14:06:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:10.748 "name": "Existed_Raid", 00:37:10.748 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:10.748 "strip_size_kb": 64, 00:37:10.748 "state": "configuring", 00:37:10.748 "raid_level": "raid5f", 00:37:10.748 "superblock": true, 00:37:10.748 "num_base_bdevs": 4, 00:37:10.748 "num_base_bdevs_discovered": 2, 00:37:10.748 "num_base_bdevs_operational": 4, 00:37:10.748 "base_bdevs_list": [ 00:37:10.748 { 00:37:10.748 "name": "BaseBdev1", 00:37:10.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:10.748 "is_configured": false, 00:37:10.748 "data_offset": 0, 00:37:10.748 "data_size": 0 00:37:10.748 }, 00:37:10.748 { 00:37:10.748 "name": null, 00:37:10.748 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:10.748 
"is_configured": false, 00:37:10.748 "data_offset": 0, 00:37:10.748 "data_size": 63488 00:37:10.748 }, 00:37:10.748 { 00:37:10.748 "name": "BaseBdev3", 00:37:10.748 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:10.748 "is_configured": true, 00:37:10.748 "data_offset": 2048, 00:37:10.748 "data_size": 63488 00:37:10.748 }, 00:37:10.748 { 00:37:10.748 "name": "BaseBdev4", 00:37:10.748 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:10.748 "is_configured": true, 00:37:10.748 "data_offset": 2048, 00:37:10.748 "data_size": 63488 00:37:10.748 } 00:37:10.748 ] 00:37:10.748 }' 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:10.748 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.316 [2024-10-09 14:06:17.744258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:37:11.316 BaseBdev1 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.316 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.316 [ 00:37:11.316 { 00:37:11.316 "name": "BaseBdev1", 00:37:11.316 "aliases": [ 00:37:11.316 "37fa3b47-12a8-4de6-83ba-393dfa70b5a8" 00:37:11.316 ], 00:37:11.316 "product_name": "Malloc disk", 00:37:11.316 "block_size": 512, 00:37:11.316 "num_blocks": 65536, 00:37:11.316 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 
00:37:11.316 "assigned_rate_limits": { 00:37:11.317 "rw_ios_per_sec": 0, 00:37:11.317 "rw_mbytes_per_sec": 0, 00:37:11.317 "r_mbytes_per_sec": 0, 00:37:11.317 "w_mbytes_per_sec": 0 00:37:11.317 }, 00:37:11.317 "claimed": true, 00:37:11.317 "claim_type": "exclusive_write", 00:37:11.317 "zoned": false, 00:37:11.317 "supported_io_types": { 00:37:11.317 "read": true, 00:37:11.317 "write": true, 00:37:11.317 "unmap": true, 00:37:11.317 "flush": true, 00:37:11.317 "reset": true, 00:37:11.317 "nvme_admin": false, 00:37:11.317 "nvme_io": false, 00:37:11.317 "nvme_io_md": false, 00:37:11.317 "write_zeroes": true, 00:37:11.317 "zcopy": true, 00:37:11.317 "get_zone_info": false, 00:37:11.317 "zone_management": false, 00:37:11.317 "zone_append": false, 00:37:11.317 "compare": false, 00:37:11.317 "compare_and_write": false, 00:37:11.317 "abort": true, 00:37:11.317 "seek_hole": false, 00:37:11.317 "seek_data": false, 00:37:11.317 "copy": true, 00:37:11.317 "nvme_iov_md": false 00:37:11.317 }, 00:37:11.317 "memory_domains": [ 00:37:11.317 { 00:37:11.317 "dma_device_id": "system", 00:37:11.317 "dma_device_type": 1 00:37:11.317 }, 00:37:11.317 { 00:37:11.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:11.317 "dma_device_type": 2 00:37:11.317 } 00:37:11.317 ], 00:37:11.317 "driver_specific": {} 00:37:11.317 } 00:37:11.317 ] 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:11.317 14:06:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:11.317 "name": "Existed_Raid", 00:37:11.317 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:11.317 "strip_size_kb": 64, 00:37:11.317 "state": "configuring", 00:37:11.317 "raid_level": "raid5f", 00:37:11.317 "superblock": true, 00:37:11.317 "num_base_bdevs": 4, 00:37:11.317 "num_base_bdevs_discovered": 3, 00:37:11.317 "num_base_bdevs_operational": 4, 00:37:11.317 "base_bdevs_list": [ 00:37:11.317 { 00:37:11.317 "name": "BaseBdev1", 00:37:11.317 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 
00:37:11.317 "is_configured": true, 00:37:11.317 "data_offset": 2048, 00:37:11.317 "data_size": 63488 00:37:11.317 }, 00:37:11.317 { 00:37:11.317 "name": null, 00:37:11.317 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:11.317 "is_configured": false, 00:37:11.317 "data_offset": 0, 00:37:11.317 "data_size": 63488 00:37:11.317 }, 00:37:11.317 { 00:37:11.317 "name": "BaseBdev3", 00:37:11.317 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:11.317 "is_configured": true, 00:37:11.317 "data_offset": 2048, 00:37:11.317 "data_size": 63488 00:37:11.317 }, 00:37:11.317 { 00:37:11.317 "name": "BaseBdev4", 00:37:11.317 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:11.317 "is_configured": true, 00:37:11.317 "data_offset": 2048, 00:37:11.317 "data_size": 63488 00:37:11.317 } 00:37:11.317 ] 00:37:11.317 }' 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:11.317 14:06:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.884 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.884 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.884 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:11.885 [2024-10-09 14:06:18.284402] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:11.885 "name": "Existed_Raid", 00:37:11.885 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:11.885 "strip_size_kb": 64, 00:37:11.885 "state": "configuring", 00:37:11.885 "raid_level": "raid5f", 00:37:11.885 "superblock": true, 00:37:11.885 "num_base_bdevs": 4, 00:37:11.885 "num_base_bdevs_discovered": 2, 00:37:11.885 "num_base_bdevs_operational": 4, 00:37:11.885 "base_bdevs_list": [ 00:37:11.885 { 00:37:11.885 "name": "BaseBdev1", 00:37:11.885 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:11.885 "is_configured": true, 00:37:11.885 "data_offset": 2048, 00:37:11.885 "data_size": 63488 00:37:11.885 }, 00:37:11.885 { 00:37:11.885 "name": null, 00:37:11.885 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:11.885 "is_configured": false, 00:37:11.885 "data_offset": 0, 00:37:11.885 "data_size": 63488 00:37:11.885 }, 00:37:11.885 { 00:37:11.885 "name": null, 00:37:11.885 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:11.885 "is_configured": false, 00:37:11.885 "data_offset": 0, 00:37:11.885 "data_size": 63488 00:37:11.885 }, 00:37:11.885 { 00:37:11.885 "name": "BaseBdev4", 00:37:11.885 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:11.885 "is_configured": true, 00:37:11.885 "data_offset": 2048, 00:37:11.885 "data_size": 63488 00:37:11.885 } 00:37:11.885 ] 00:37:11.885 }' 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:11.885 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.452 [2024-10-09 14:06:18.792565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.452 "name": "Existed_Raid", 00:37:12.452 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:12.452 "strip_size_kb": 64, 00:37:12.452 "state": "configuring", 00:37:12.452 "raid_level": "raid5f", 00:37:12.452 "superblock": true, 00:37:12.452 "num_base_bdevs": 4, 00:37:12.452 "num_base_bdevs_discovered": 3, 00:37:12.452 "num_base_bdevs_operational": 4, 00:37:12.452 "base_bdevs_list": [ 00:37:12.452 { 00:37:12.452 "name": "BaseBdev1", 00:37:12.452 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:12.452 "is_configured": true, 00:37:12.452 "data_offset": 2048, 00:37:12.452 "data_size": 63488 00:37:12.452 }, 00:37:12.452 { 00:37:12.452 "name": null, 00:37:12.452 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:12.452 "is_configured": false, 00:37:12.452 "data_offset": 0, 00:37:12.452 "data_size": 63488 00:37:12.452 }, 00:37:12.452 { 00:37:12.452 "name": "BaseBdev3", 00:37:12.452 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 
00:37:12.452 "is_configured": true, 00:37:12.452 "data_offset": 2048, 00:37:12.452 "data_size": 63488 00:37:12.452 }, 00:37:12.452 { 00:37:12.452 "name": "BaseBdev4", 00:37:12.452 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:12.452 "is_configured": true, 00:37:12.452 "data_offset": 2048, 00:37:12.452 "data_size": 63488 00:37:12.452 } 00:37:12.452 ] 00:37:12.452 }' 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.452 14:06:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.710 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.710 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.710 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.710 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.967 [2024-10-09 14:06:19.292679] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:12.967 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.967 "name": "Existed_Raid", 00:37:12.968 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:12.968 "strip_size_kb": 64, 00:37:12.968 "state": "configuring", 00:37:12.968 "raid_level": "raid5f", 
00:37:12.968 "superblock": true, 00:37:12.968 "num_base_bdevs": 4, 00:37:12.968 "num_base_bdevs_discovered": 2, 00:37:12.968 "num_base_bdevs_operational": 4, 00:37:12.968 "base_bdevs_list": [ 00:37:12.968 { 00:37:12.968 "name": null, 00:37:12.968 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:12.968 "is_configured": false, 00:37:12.968 "data_offset": 0, 00:37:12.968 "data_size": 63488 00:37:12.968 }, 00:37:12.968 { 00:37:12.968 "name": null, 00:37:12.968 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:12.968 "is_configured": false, 00:37:12.968 "data_offset": 0, 00:37:12.968 "data_size": 63488 00:37:12.968 }, 00:37:12.968 { 00:37:12.968 "name": "BaseBdev3", 00:37:12.968 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:12.968 "is_configured": true, 00:37:12.968 "data_offset": 2048, 00:37:12.968 "data_size": 63488 00:37:12.968 }, 00:37:12.968 { 00:37:12.968 "name": "BaseBdev4", 00:37:12.968 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:12.968 "is_configured": true, 00:37:12.968 "data_offset": 2048, 00:37:12.968 "data_size": 63488 00:37:12.968 } 00:37:12.968 ] 00:37:12.968 }' 00:37:12.968 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.968 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.226 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.226 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:37:13.226 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.226 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.226 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.485 [2024-10-09 14:06:19.791186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:13.485 "name": "Existed_Raid", 00:37:13.485 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:13.485 "strip_size_kb": 64, 00:37:13.485 "state": "configuring", 00:37:13.485 "raid_level": "raid5f", 00:37:13.485 "superblock": true, 00:37:13.485 "num_base_bdevs": 4, 00:37:13.485 "num_base_bdevs_discovered": 3, 00:37:13.485 "num_base_bdevs_operational": 4, 00:37:13.485 "base_bdevs_list": [ 00:37:13.485 { 00:37:13.485 "name": null, 00:37:13.485 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:13.485 "is_configured": false, 00:37:13.485 "data_offset": 0, 00:37:13.485 "data_size": 63488 00:37:13.485 }, 00:37:13.485 { 00:37:13.485 "name": "BaseBdev2", 00:37:13.485 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:13.485 "is_configured": true, 00:37:13.485 "data_offset": 2048, 00:37:13.485 "data_size": 63488 00:37:13.485 }, 00:37:13.485 { 00:37:13.485 "name": "BaseBdev3", 00:37:13.485 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:13.485 "is_configured": true, 00:37:13.485 "data_offset": 2048, 00:37:13.485 "data_size": 63488 00:37:13.485 }, 00:37:13.485 { 00:37:13.485 "name": "BaseBdev4", 00:37:13.485 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:13.485 "is_configured": true, 00:37:13.485 "data_offset": 2048, 00:37:13.485 "data_size": 63488 00:37:13.485 } 00:37:13.485 ] 00:37:13.485 }' 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:13.485 14:06:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.744 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.744 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.744 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:13.744 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:37:13.744 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 37fa3b47-12a8-4de6-83ba-393dfa70b5a8 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.003 [2024-10-09 14:06:20.358243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:37:14.003 [2024-10-09 
14:06:20.358423] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:37:14.003 [2024-10-09 14:06:20.358437] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:14.003 [2024-10-09 14:06:20.358710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:14.003 NewBaseBdev 00:37:14.003 [2024-10-09 14:06:20.359151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:37:14.003 [2024-10-09 14:06:20.359172] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:37:14.003 [2024-10-09 14:06:20.359271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.003 14:06:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.003 [ 00:37:14.003 { 00:37:14.003 "name": "NewBaseBdev", 00:37:14.003 "aliases": [ 00:37:14.003 "37fa3b47-12a8-4de6-83ba-393dfa70b5a8" 00:37:14.003 ], 00:37:14.003 "product_name": "Malloc disk", 00:37:14.003 "block_size": 512, 00:37:14.003 "num_blocks": 65536, 00:37:14.003 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:14.003 "assigned_rate_limits": { 00:37:14.003 "rw_ios_per_sec": 0, 00:37:14.003 "rw_mbytes_per_sec": 0, 00:37:14.003 "r_mbytes_per_sec": 0, 00:37:14.003 "w_mbytes_per_sec": 0 00:37:14.003 }, 00:37:14.003 "claimed": true, 00:37:14.003 "claim_type": "exclusive_write", 00:37:14.003 "zoned": false, 00:37:14.003 "supported_io_types": { 00:37:14.003 "read": true, 00:37:14.003 "write": true, 00:37:14.003 "unmap": true, 00:37:14.003 "flush": true, 00:37:14.003 "reset": true, 00:37:14.003 "nvme_admin": false, 00:37:14.003 "nvme_io": false, 00:37:14.003 "nvme_io_md": false, 00:37:14.003 "write_zeroes": true, 00:37:14.003 "zcopy": true, 00:37:14.003 "get_zone_info": false, 00:37:14.003 "zone_management": false, 00:37:14.003 "zone_append": false, 00:37:14.003 "compare": false, 00:37:14.003 "compare_and_write": false, 00:37:14.003 "abort": true, 00:37:14.003 "seek_hole": false, 00:37:14.003 "seek_data": false, 00:37:14.003 "copy": true, 00:37:14.003 "nvme_iov_md": false 00:37:14.003 }, 00:37:14.003 "memory_domains": [ 00:37:14.003 { 00:37:14.003 "dma_device_id": "system", 00:37:14.003 "dma_device_type": 1 00:37:14.003 }, 00:37:14.003 { 00:37:14.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:37:14.003 "dma_device_type": 2 00:37:14.003 } 00:37:14.003 ], 00:37:14.003 "driver_specific": {} 00:37:14.003 } 00:37:14.003 ] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.003 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:14.003 "name": "Existed_Raid", 00:37:14.003 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:14.003 "strip_size_kb": 64, 00:37:14.003 "state": "online", 00:37:14.003 "raid_level": "raid5f", 00:37:14.003 "superblock": true, 00:37:14.003 "num_base_bdevs": 4, 00:37:14.003 "num_base_bdevs_discovered": 4, 00:37:14.003 "num_base_bdevs_operational": 4, 00:37:14.003 "base_bdevs_list": [ 00:37:14.003 { 00:37:14.003 "name": "NewBaseBdev", 00:37:14.003 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:14.003 "is_configured": true, 00:37:14.003 "data_offset": 2048, 00:37:14.003 "data_size": 63488 00:37:14.003 }, 00:37:14.003 { 00:37:14.003 "name": "BaseBdev2", 00:37:14.003 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:14.003 "is_configured": true, 00:37:14.004 "data_offset": 2048, 00:37:14.004 "data_size": 63488 00:37:14.004 }, 00:37:14.004 { 00:37:14.004 "name": "BaseBdev3", 00:37:14.004 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:14.004 "is_configured": true, 00:37:14.004 "data_offset": 2048, 00:37:14.004 "data_size": 63488 00:37:14.004 }, 00:37:14.004 { 00:37:14.004 "name": "BaseBdev4", 00:37:14.004 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:14.004 "is_configured": true, 00:37:14.004 "data_offset": 2048, 00:37:14.004 "data_size": 63488 00:37:14.004 } 00:37:14.004 ] 00:37:14.004 }' 00:37:14.004 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:14.004 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:37:14.571 14:06:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 [2024-10-09 14:06:20.846614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:14.571 "name": "Existed_Raid", 00:37:14.571 "aliases": [ 00:37:14.571 "8db02d2f-2659-4933-93f7-5a917749f140" 00:37:14.571 ], 00:37:14.571 "product_name": "Raid Volume", 00:37:14.571 "block_size": 512, 00:37:14.571 "num_blocks": 190464, 00:37:14.571 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:14.571 "assigned_rate_limits": { 00:37:14.571 "rw_ios_per_sec": 0, 00:37:14.571 "rw_mbytes_per_sec": 0, 00:37:14.571 "r_mbytes_per_sec": 0, 00:37:14.571 "w_mbytes_per_sec": 0 00:37:14.571 }, 00:37:14.571 "claimed": false, 00:37:14.571 "zoned": false, 00:37:14.571 "supported_io_types": { 00:37:14.571 "read": true, 00:37:14.571 
"write": true, 00:37:14.571 "unmap": false, 00:37:14.571 "flush": false, 00:37:14.571 "reset": true, 00:37:14.571 "nvme_admin": false, 00:37:14.571 "nvme_io": false, 00:37:14.571 "nvme_io_md": false, 00:37:14.571 "write_zeroes": true, 00:37:14.571 "zcopy": false, 00:37:14.571 "get_zone_info": false, 00:37:14.571 "zone_management": false, 00:37:14.571 "zone_append": false, 00:37:14.571 "compare": false, 00:37:14.571 "compare_and_write": false, 00:37:14.571 "abort": false, 00:37:14.571 "seek_hole": false, 00:37:14.571 "seek_data": false, 00:37:14.571 "copy": false, 00:37:14.571 "nvme_iov_md": false 00:37:14.571 }, 00:37:14.571 "driver_specific": { 00:37:14.571 "raid": { 00:37:14.571 "uuid": "8db02d2f-2659-4933-93f7-5a917749f140", 00:37:14.571 "strip_size_kb": 64, 00:37:14.571 "state": "online", 00:37:14.571 "raid_level": "raid5f", 00:37:14.571 "superblock": true, 00:37:14.571 "num_base_bdevs": 4, 00:37:14.571 "num_base_bdevs_discovered": 4, 00:37:14.571 "num_base_bdevs_operational": 4, 00:37:14.571 "base_bdevs_list": [ 00:37:14.571 { 00:37:14.571 "name": "NewBaseBdev", 00:37:14.571 "uuid": "37fa3b47-12a8-4de6-83ba-393dfa70b5a8", 00:37:14.571 "is_configured": true, 00:37:14.571 "data_offset": 2048, 00:37:14.571 "data_size": 63488 00:37:14.571 }, 00:37:14.571 { 00:37:14.571 "name": "BaseBdev2", 00:37:14.571 "uuid": "2bcf9331-1e21-479f-9893-e977124b73b8", 00:37:14.571 "is_configured": true, 00:37:14.571 "data_offset": 2048, 00:37:14.571 "data_size": 63488 00:37:14.571 }, 00:37:14.571 { 00:37:14.571 "name": "BaseBdev3", 00:37:14.571 "uuid": "9f83e2b6-aaf0-4c7e-9c4c-1e4ddce061e7", 00:37:14.571 "is_configured": true, 00:37:14.571 "data_offset": 2048, 00:37:14.571 "data_size": 63488 00:37:14.571 }, 00:37:14.571 { 00:37:14.571 "name": "BaseBdev4", 00:37:14.571 "uuid": "01d1b67d-ae1a-4d7c-a318-7a20db6f5676", 00:37:14.571 "is_configured": true, 00:37:14.571 "data_offset": 2048, 00:37:14.571 "data_size": 63488 00:37:14.571 } 00:37:14.571 ] 00:37:14.571 } 00:37:14.571 } 
00:37:14.571 }' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:37:14.571 BaseBdev2 00:37:14.571 BaseBdev3 00:37:14.571 BaseBdev4' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.571 14:06:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.571 
14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.571 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:14.831 [2024-10-09 14:06:21.162443] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:14.831 [2024-10-09 14:06:21.162476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:14.831 [2024-10-09 14:06:21.162562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:14.831 [2024-10-09 14:06:21.162816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:14.831 [2024-10-09 14:06:21.162843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94323 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94323 ']' 00:37:14.831 14:06:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94323 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94323 00:37:14.831 killing process with pid 94323 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94323' 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94323 00:37:14.831 [2024-10-09 14:06:21.211659] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:14.831 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94323 00:37:14.831 [2024-10-09 14:06:21.252780] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:15.090 14:06:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:37:15.090 00:37:15.090 real 0m9.912s 00:37:15.090 user 0m17.147s 00:37:15.090 sys 0m2.133s 00:37:15.090 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:15.090 14:06:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:15.090 ************************************ 00:37:15.090 END TEST raid5f_state_function_test_sb 00:37:15.090 ************************************ 00:37:15.090 14:06:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:37:15.090 14:06:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:15.090 14:06:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:15.090 14:06:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:15.090 ************************************ 00:37:15.090 START TEST raid5f_superblock_test 00:37:15.090 ************************************ 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94977 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94977 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94977 ']' 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:15.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:15.090 14:06:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:15.349 [2024-10-09 14:06:21.682255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:15.349 [2024-10-09 14:06:21.682452] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94977 ] 00:37:15.349 [2024-10-09 14:06:21.856767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.608 [2024-10-09 14:06:21.901569] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.608 [2024-10-09 14:06:21.944993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:15.608 [2024-10-09 14:06:21.945041] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 malloc1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 [2024-10-09 14:06:22.557016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:16.176 [2024-10-09 14:06:22.557090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:16.176 [2024-10-09 14:06:22.557112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:16.176 [2024-10-09 14:06:22.557130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:16.176 [2024-10-09 14:06:22.559686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:16.176 [2024-10-09 14:06:22.559739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:16.176 pt1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 malloc2 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 [2024-10-09 14:06:22.591437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:16.176 [2024-10-09 14:06:22.591508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:16.176 [2024-10-09 14:06:22.591534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:16.176 [2024-10-09 14:06:22.591569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:16.176 [2024-10-09 14:06:22.594976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:16.176 [2024-10-09 14:06:22.595028] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:16.176 pt2 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 malloc3 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 [2024-10-09 14:06:22.616579] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:16.176 [2024-10-09 14:06:22.616631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:16.176 [2024-10-09 14:06:22.616652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:16.176 [2024-10-09 14:06:22.616667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:16.176 [2024-10-09 14:06:22.619093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:16.176 [2024-10-09 14:06:22.619132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:16.176 pt3 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 malloc4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 [2024-10-09 14:06:22.641503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:16.176 [2024-10-09 14:06:22.641567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:16.176 [2024-10-09 14:06:22.641585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:16.176 [2024-10-09 14:06:22.641602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:16.176 [2024-10-09 14:06:22.644005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:16.176 [2024-10-09 14:06:22.644044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:16.176 pt4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:37:16.176 [2024-10-09 14:06:22.649594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:16.176 [2024-10-09 14:06:22.651767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:16.176 [2024-10-09 14:06:22.651828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:16.176 [2024-10-09 14:06:22.651892] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:16.176 [2024-10-09 14:06:22.652049] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:37:16.176 [2024-10-09 14:06:22.652070] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:16.176 [2024-10-09 14:06:22.652327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:16.176 [2024-10-09 14:06:22.652793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:37:16.176 [2024-10-09 14:06:22.652813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:37:16.176 [2024-10-09 14:06:22.652929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:16.176 
14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.176 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:16.176 "name": "raid_bdev1", 00:37:16.176 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:16.176 "strip_size_kb": 64, 00:37:16.176 "state": "online", 00:37:16.176 "raid_level": "raid5f", 00:37:16.176 "superblock": true, 00:37:16.176 "num_base_bdevs": 4, 00:37:16.176 "num_base_bdevs_discovered": 4, 00:37:16.176 "num_base_bdevs_operational": 4, 00:37:16.176 "base_bdevs_list": [ 00:37:16.176 { 00:37:16.176 "name": "pt1", 00:37:16.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:16.176 "is_configured": true, 00:37:16.176 "data_offset": 2048, 00:37:16.177 "data_size": 63488 00:37:16.177 }, 00:37:16.177 { 00:37:16.177 "name": "pt2", 00:37:16.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:16.177 "is_configured": true, 00:37:16.177 "data_offset": 2048, 00:37:16.177 
"data_size": 63488 00:37:16.177 }, 00:37:16.177 { 00:37:16.177 "name": "pt3", 00:37:16.177 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:16.177 "is_configured": true, 00:37:16.177 "data_offset": 2048, 00:37:16.177 "data_size": 63488 00:37:16.177 }, 00:37:16.177 { 00:37:16.177 "name": "pt4", 00:37:16.177 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:16.177 "is_configured": true, 00:37:16.177 "data_offset": 2048, 00:37:16.177 "data_size": 63488 00:37:16.177 } 00:37:16.177 ] 00:37:16.177 }' 00:37:16.177 14:06:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:16.177 14:06:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:16.790 [2024-10-09 14:06:23.095088] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.790 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:16.790 "name": "raid_bdev1", 00:37:16.790 "aliases": [ 00:37:16.790 "240d1926-d0fa-4e78-9c01-f3cac7288774" 00:37:16.790 ], 00:37:16.790 "product_name": "Raid Volume", 00:37:16.790 "block_size": 512, 00:37:16.790 "num_blocks": 190464, 00:37:16.790 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:16.790 "assigned_rate_limits": { 00:37:16.790 "rw_ios_per_sec": 0, 00:37:16.790 "rw_mbytes_per_sec": 0, 00:37:16.790 "r_mbytes_per_sec": 0, 00:37:16.790 "w_mbytes_per_sec": 0 00:37:16.790 }, 00:37:16.790 "claimed": false, 00:37:16.790 "zoned": false, 00:37:16.790 "supported_io_types": { 00:37:16.790 "read": true, 00:37:16.790 "write": true, 00:37:16.790 "unmap": false, 00:37:16.790 "flush": false, 00:37:16.790 "reset": true, 00:37:16.790 "nvme_admin": false, 00:37:16.790 "nvme_io": false, 00:37:16.790 "nvme_io_md": false, 00:37:16.790 "write_zeroes": true, 00:37:16.790 "zcopy": false, 00:37:16.790 "get_zone_info": false, 00:37:16.790 "zone_management": false, 00:37:16.790 "zone_append": false, 00:37:16.790 "compare": false, 00:37:16.790 "compare_and_write": false, 00:37:16.790 "abort": false, 00:37:16.790 "seek_hole": false, 00:37:16.790 "seek_data": false, 00:37:16.790 "copy": false, 00:37:16.790 "nvme_iov_md": false 00:37:16.790 }, 00:37:16.790 "driver_specific": { 00:37:16.790 "raid": { 00:37:16.790 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:16.790 "strip_size_kb": 64, 00:37:16.790 "state": "online", 00:37:16.790 "raid_level": "raid5f", 00:37:16.790 "superblock": true, 00:37:16.790 "num_base_bdevs": 4, 00:37:16.790 "num_base_bdevs_discovered": 4, 00:37:16.790 "num_base_bdevs_operational": 4, 00:37:16.790 "base_bdevs_list": [ 00:37:16.790 { 00:37:16.790 "name": "pt1", 00:37:16.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:16.790 "is_configured": true, 00:37:16.790 "data_offset": 2048, 
00:37:16.790 "data_size": 63488 00:37:16.791 }, 00:37:16.791 { 00:37:16.791 "name": "pt2", 00:37:16.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:16.791 "is_configured": true, 00:37:16.791 "data_offset": 2048, 00:37:16.791 "data_size": 63488 00:37:16.791 }, 00:37:16.791 { 00:37:16.791 "name": "pt3", 00:37:16.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:16.791 "is_configured": true, 00:37:16.791 "data_offset": 2048, 00:37:16.791 "data_size": 63488 00:37:16.791 }, 00:37:16.791 { 00:37:16.791 "name": "pt4", 00:37:16.791 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:16.791 "is_configured": true, 00:37:16.791 "data_offset": 2048, 00:37:16.791 "data_size": 63488 00:37:16.791 } 00:37:16.791 ] 00:37:16.791 } 00:37:16.791 } 00:37:16.791 }' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:16.791 pt2 00:37:16.791 pt3 00:37:16.791 pt4' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.791 14:06:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.791 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 [2024-10-09 14:06:23.415124] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=240d1926-d0fa-4e78-9c01-f3cac7288774 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
240d1926-d0fa-4e78-9c01-f3cac7288774 ']' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 [2024-10-09 14:06:23.454954] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:17.050 [2024-10-09 14:06:23.455077] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:17.050 [2024-10-09 14:06:23.455216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:17.050 [2024-10-09 14:06:23.455340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:17.050 [2024-10-09 14:06:23.455441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.050 
14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:17.050 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.051 14:06:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.051 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.310 [2024-10-09 14:06:23.619038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:17.310 [2024-10-09 14:06:23.621340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:17.310 [2024-10-09 14:06:23.621495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:37:17.310 [2024-10-09 14:06:23.621535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:37:17.310 [2024-10-09 14:06:23.621601] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:17.310 [2024-10-09 14:06:23.621655] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:17.310 [2024-10-09 14:06:23.621679] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:37:17.310 [2024-10-09 14:06:23.621698] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:37:17.310 [2024-10-09 14:06:23.621717] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:17.310 [2024-10-09 14:06:23.621730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:37:17.310 request: 00:37:17.310 { 00:37:17.310 "name": "raid_bdev1", 00:37:17.310 "raid_level": "raid5f", 00:37:17.310 "base_bdevs": [ 00:37:17.310 "malloc1", 00:37:17.310 "malloc2", 00:37:17.310 "malloc3", 00:37:17.310 "malloc4" 00:37:17.310 ], 00:37:17.310 "strip_size_kb": 64, 00:37:17.310 "superblock": false, 00:37:17.310 "method": "bdev_raid_create", 00:37:17.310 "req_id": 1 00:37:17.310 } 00:37:17.310 Got JSON-RPC error response 
00:37:17.310 response: 00:37:17.310 { 00:37:17.310 "code": -17, 00:37:17.310 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:17.310 } 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.310 [2024-10-09 14:06:23.678998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:17.310 [2024-10-09 14:06:23.679144] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:37:17.310 [2024-10-09 14:06:23.679202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:17.310 [2024-10-09 14:06:23.679277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.310 [2024-10-09 14:06:23.681782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.310 [2024-10-09 14:06:23.681909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:17.310 [2024-10-09 14:06:23.682070] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:17.310 [2024-10-09 14:06:23.682150] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:17.310 pt1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.310 "name": "raid_bdev1", 00:37:17.310 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:17.310 "strip_size_kb": 64, 00:37:17.310 "state": "configuring", 00:37:17.310 "raid_level": "raid5f", 00:37:17.310 "superblock": true, 00:37:17.310 "num_base_bdevs": 4, 00:37:17.310 "num_base_bdevs_discovered": 1, 00:37:17.310 "num_base_bdevs_operational": 4, 00:37:17.310 "base_bdevs_list": [ 00:37:17.310 { 00:37:17.310 "name": "pt1", 00:37:17.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.310 "is_configured": true, 00:37:17.310 "data_offset": 2048, 00:37:17.310 "data_size": 63488 00:37:17.310 }, 00:37:17.310 { 00:37:17.310 "name": null, 00:37:17.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:17.310 "is_configured": false, 00:37:17.310 "data_offset": 2048, 00:37:17.310 "data_size": 63488 00:37:17.310 }, 00:37:17.310 { 00:37:17.310 "name": null, 00:37:17.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:17.310 "is_configured": false, 00:37:17.310 "data_offset": 2048, 00:37:17.310 "data_size": 63488 00:37:17.310 }, 00:37:17.310 { 00:37:17.310 "name": null, 00:37:17.310 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:17.310 "is_configured": false, 00:37:17.310 "data_offset": 2048, 00:37:17.310 "data_size": 63488 00:37:17.310 } 00:37:17.310 ] 00:37:17.310 }' 
00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.310 14:06:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.878 [2024-10-09 14:06:24.135133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:17.878 [2024-10-09 14:06:24.135298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:17.878 [2024-10-09 14:06:24.135394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:17.878 [2024-10-09 14:06:24.135472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:17.878 [2024-10-09 14:06:24.135885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:17.878 [2024-10-09 14:06:24.135905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:17.878 [2024-10-09 14:06:24.135973] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:17.878 [2024-10-09 14:06:24.135994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:17.878 pt2 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.878 [2024-10-09 14:06:24.143141] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:17.878 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.879 "name": "raid_bdev1", 00:37:17.879 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:17.879 "strip_size_kb": 64, 00:37:17.879 "state": "configuring", 00:37:17.879 "raid_level": "raid5f", 00:37:17.879 "superblock": true, 00:37:17.879 "num_base_bdevs": 4, 00:37:17.879 "num_base_bdevs_discovered": 1, 00:37:17.879 "num_base_bdevs_operational": 4, 00:37:17.879 "base_bdevs_list": [ 00:37:17.879 { 00:37:17.879 "name": "pt1", 00:37:17.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:17.879 "is_configured": true, 00:37:17.879 "data_offset": 2048, 00:37:17.879 "data_size": 63488 00:37:17.879 }, 00:37:17.879 { 00:37:17.879 "name": null, 00:37:17.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:17.879 "is_configured": false, 00:37:17.879 "data_offset": 0, 00:37:17.879 "data_size": 63488 00:37:17.879 }, 00:37:17.879 { 00:37:17.879 "name": null, 00:37:17.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:17.879 "is_configured": false, 00:37:17.879 "data_offset": 2048, 00:37:17.879 "data_size": 63488 00:37:17.879 }, 00:37:17.879 { 00:37:17.879 "name": null, 00:37:17.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:17.879 "is_configured": false, 00:37:17.879 "data_offset": 2048, 00:37:17.879 "data_size": 63488 00:37:17.879 } 00:37:17.879 ] 00:37:17.879 }' 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.879 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.141 [2024-10-09 14:06:24.607245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:18.141 [2024-10-09 14:06:24.607416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.141 [2024-10-09 14:06:24.607468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:18.141 [2024-10-09 14:06:24.607590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.141 [2024-10-09 14:06:24.608017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.141 [2024-10-09 14:06:24.608187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:18.141 [2024-10-09 14:06:24.608271] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:18.141 [2024-10-09 14:06:24.608300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:18.141 pt2 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.141 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.141 [2024-10-09 14:06:24.619212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:37:18.141 [2024-10-09 14:06:24.619267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.141 [2024-10-09 14:06:24.619286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:18.141 [2024-10-09 14:06:24.619299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.141 [2024-10-09 14:06:24.619650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.141 [2024-10-09 14:06:24.619671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:18.142 [2024-10-09 14:06:24.619726] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:18.142 [2024-10-09 14:06:24.619747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:18.142 pt3 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.142 [2024-10-09 14:06:24.627219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:18.142 [2024-10-09 14:06:24.627387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:18.142 [2024-10-09 14:06:24.627436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:18.142 [2024-10-09 14:06:24.627526] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:18.142 [2024-10-09 14:06:24.627883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:18.142 [2024-10-09 14:06:24.627988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:18.142 [2024-10-09 14:06:24.628053] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:18.142 [2024-10-09 14:06:24.628077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:18.142 [2024-10-09 14:06:24.628186] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:37:18.142 [2024-10-09 14:06:24.628202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:18.142 [2024-10-09 14:06:24.628446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:18.142 [2024-10-09 14:06:24.628929] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:37:18.142 [2024-10-09 14:06:24.628982] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:37:18.142 [2024-10-09 14:06:24.629086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:18.142 pt4 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:18.142 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.143 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.143 "name": "raid_bdev1", 00:37:18.143 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:18.143 "strip_size_kb": 64, 00:37:18.143 "state": "online", 00:37:18.143 "raid_level": "raid5f", 00:37:18.143 "superblock": true, 00:37:18.143 "num_base_bdevs": 4, 00:37:18.143 "num_base_bdevs_discovered": 4, 00:37:18.143 "num_base_bdevs_operational": 4, 00:37:18.143 "base_bdevs_list": [ 00:37:18.143 { 00:37:18.143 "name": "pt1", 00:37:18.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:18.143 "is_configured": true, 00:37:18.143 
"data_offset": 2048, 00:37:18.143 "data_size": 63488 00:37:18.143 }, 00:37:18.143 { 00:37:18.143 "name": "pt2", 00:37:18.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:18.143 "is_configured": true, 00:37:18.143 "data_offset": 2048, 00:37:18.143 "data_size": 63488 00:37:18.143 }, 00:37:18.143 { 00:37:18.143 "name": "pt3", 00:37:18.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:18.143 "is_configured": true, 00:37:18.143 "data_offset": 2048, 00:37:18.143 "data_size": 63488 00:37:18.143 }, 00:37:18.144 { 00:37:18.144 "name": "pt4", 00:37:18.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:18.144 "is_configured": true, 00:37:18.144 "data_offset": 2048, 00:37:18.144 "data_size": 63488 00:37:18.144 } 00:37:18.144 ] 00:37:18.144 }' 00:37:18.144 14:06:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.144 14:06:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:18.715 14:06:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.715 [2024-10-09 14:06:25.115545] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:18.715 "name": "raid_bdev1", 00:37:18.715 "aliases": [ 00:37:18.715 "240d1926-d0fa-4e78-9c01-f3cac7288774" 00:37:18.715 ], 00:37:18.715 "product_name": "Raid Volume", 00:37:18.715 "block_size": 512, 00:37:18.715 "num_blocks": 190464, 00:37:18.715 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:18.715 "assigned_rate_limits": { 00:37:18.715 "rw_ios_per_sec": 0, 00:37:18.715 "rw_mbytes_per_sec": 0, 00:37:18.715 "r_mbytes_per_sec": 0, 00:37:18.715 "w_mbytes_per_sec": 0 00:37:18.715 }, 00:37:18.715 "claimed": false, 00:37:18.715 "zoned": false, 00:37:18.715 "supported_io_types": { 00:37:18.715 "read": true, 00:37:18.715 "write": true, 00:37:18.715 "unmap": false, 00:37:18.715 "flush": false, 00:37:18.715 "reset": true, 00:37:18.715 "nvme_admin": false, 00:37:18.715 "nvme_io": false, 00:37:18.715 "nvme_io_md": false, 00:37:18.715 "write_zeroes": true, 00:37:18.715 "zcopy": false, 00:37:18.715 "get_zone_info": false, 00:37:18.715 "zone_management": false, 00:37:18.715 "zone_append": false, 00:37:18.715 "compare": false, 00:37:18.715 "compare_and_write": false, 00:37:18.715 "abort": false, 00:37:18.715 "seek_hole": false, 00:37:18.715 "seek_data": false, 00:37:18.715 "copy": false, 00:37:18.715 "nvme_iov_md": false 00:37:18.715 }, 00:37:18.715 "driver_specific": { 00:37:18.715 "raid": { 00:37:18.715 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:18.715 "strip_size_kb": 64, 00:37:18.715 "state": "online", 00:37:18.715 "raid_level": "raid5f", 00:37:18.715 "superblock": true, 00:37:18.715 "num_base_bdevs": 4, 00:37:18.715 "num_base_bdevs_discovered": 4, 
00:37:18.715 "num_base_bdevs_operational": 4, 00:37:18.715 "base_bdevs_list": [ 00:37:18.715 { 00:37:18.715 "name": "pt1", 00:37:18.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:18.715 "is_configured": true, 00:37:18.715 "data_offset": 2048, 00:37:18.715 "data_size": 63488 00:37:18.715 }, 00:37:18.715 { 00:37:18.715 "name": "pt2", 00:37:18.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:18.715 "is_configured": true, 00:37:18.715 "data_offset": 2048, 00:37:18.715 "data_size": 63488 00:37:18.715 }, 00:37:18.715 { 00:37:18.715 "name": "pt3", 00:37:18.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:18.715 "is_configured": true, 00:37:18.715 "data_offset": 2048, 00:37:18.715 "data_size": 63488 00:37:18.715 }, 00:37:18.715 { 00:37:18.715 "name": "pt4", 00:37:18.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:18.715 "is_configured": true, 00:37:18.715 "data_offset": 2048, 00:37:18.715 "data_size": 63488 00:37:18.715 } 00:37:18.715 ] 00:37:18.715 } 00:37:18.715 } 00:37:18.715 }' 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:18.715 pt2 00:37:18.715 pt3 00:37:18.715 pt4' 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.715 14:06:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.715 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.974 
14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:18.974 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.975 [2024-10-09 14:06:25.439605] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 240d1926-d0fa-4e78-9c01-f3cac7288774 '!=' 240d1926-d0fa-4e78-9c01-f3cac7288774 ']' 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.975 [2024-10-09 14:06:25.487465] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.975 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.233 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.233 "name": "raid_bdev1", 00:37:19.234 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:19.234 "strip_size_kb": 64, 00:37:19.234 "state": "online", 00:37:19.234 "raid_level": "raid5f", 00:37:19.234 "superblock": true, 00:37:19.234 "num_base_bdevs": 4, 00:37:19.234 "num_base_bdevs_discovered": 3, 00:37:19.234 "num_base_bdevs_operational": 3, 00:37:19.234 "base_bdevs_list": [ 00:37:19.234 { 00:37:19.234 "name": null, 00:37:19.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.234 "is_configured": false, 00:37:19.234 "data_offset": 0, 00:37:19.234 "data_size": 63488 00:37:19.234 }, 00:37:19.234 { 00:37:19.234 "name": "pt2", 00:37:19.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.234 "is_configured": true, 00:37:19.234 "data_offset": 2048, 00:37:19.234 "data_size": 63488 00:37:19.234 }, 00:37:19.234 { 00:37:19.234 "name": "pt3", 00:37:19.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:19.234 "is_configured": true, 00:37:19.234 "data_offset": 2048, 00:37:19.234 "data_size": 63488 00:37:19.234 }, 00:37:19.234 { 00:37:19.234 "name": "pt4", 00:37:19.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:19.234 "is_configured": true, 00:37:19.234 
"data_offset": 2048, 00:37:19.234 "data_size": 63488 00:37:19.234 } 00:37:19.234 ] 00:37:19.234 }' 00:37:19.234 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.234 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.492 [2024-10-09 14:06:25.943513] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:19.492 [2024-10-09 14:06:25.943668] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:19.492 [2024-10-09 14:06:25.943769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:19.492 [2024-10-09 14:06:25.943842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:19.492 [2024-10-09 14:06:25.943857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.492 14:06:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.492 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.492 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:19.492 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:19.492 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:37:19.492 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.493 [2024-10-09 14:06:26.027510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:19.493 [2024-10-09 14:06:26.027598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:19.493 [2024-10-09 14:06:26.027619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:19.493 [2024-10-09 14:06:26.027632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:19.493 [2024-10-09 14:06:26.030100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:19.493 pt2 00:37:19.493 [2024-10-09 14:06:26.030245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:19.493 [2024-10-09 14:06:26.030332] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:19.493 [2024-10-09 14:06:26.030372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:19.493 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:19.750 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:19.751 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.751 "name": "raid_bdev1", 00:37:19.751 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:19.751 "strip_size_kb": 64, 00:37:19.751 "state": "configuring", 00:37:19.751 "raid_level": "raid5f", 00:37:19.751 "superblock": true, 00:37:19.751 
"num_base_bdevs": 4, 00:37:19.751 "num_base_bdevs_discovered": 1, 00:37:19.751 "num_base_bdevs_operational": 3, 00:37:19.751 "base_bdevs_list": [ 00:37:19.751 { 00:37:19.751 "name": null, 00:37:19.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.751 "is_configured": false, 00:37:19.751 "data_offset": 2048, 00:37:19.751 "data_size": 63488 00:37:19.751 }, 00:37:19.751 { 00:37:19.751 "name": "pt2", 00:37:19.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:19.751 "is_configured": true, 00:37:19.751 "data_offset": 2048, 00:37:19.751 "data_size": 63488 00:37:19.751 }, 00:37:19.751 { 00:37:19.751 "name": null, 00:37:19.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:19.751 "is_configured": false, 00:37:19.751 "data_offset": 2048, 00:37:19.751 "data_size": 63488 00:37:19.751 }, 00:37:19.751 { 00:37:19.751 "name": null, 00:37:19.751 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:19.751 "is_configured": false, 00:37:19.751 "data_offset": 2048, 00:37:19.751 "data_size": 63488 00:37:19.751 } 00:37:19.751 ] 00:37:19.751 }' 00:37:19.751 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.751 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.009 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:20.009 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:20.009 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:37:20.009 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.009 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.009 [2024-10-09 14:06:26.467659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:37:20.009 [2024-10-09 
14:06:26.467835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:20.009 [2024-10-09 14:06:26.467890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:20.009 [2024-10-09 14:06:26.467997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.009 [2024-10-09 14:06:26.468457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.009 [2024-10-09 14:06:26.468606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:37:20.009 [2024-10-09 14:06:26.468702] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:37:20.009 [2024-10-09 14:06:26.468741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:20.009 pt3 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.010 "name": "raid_bdev1", 00:37:20.010 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:20.010 "strip_size_kb": 64, 00:37:20.010 "state": "configuring", 00:37:20.010 "raid_level": "raid5f", 00:37:20.010 "superblock": true, 00:37:20.010 "num_base_bdevs": 4, 00:37:20.010 "num_base_bdevs_discovered": 2, 00:37:20.010 "num_base_bdevs_operational": 3, 00:37:20.010 "base_bdevs_list": [ 00:37:20.010 { 00:37:20.010 "name": null, 00:37:20.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.010 "is_configured": false, 00:37:20.010 "data_offset": 2048, 00:37:20.010 "data_size": 63488 00:37:20.010 }, 00:37:20.010 { 00:37:20.010 "name": "pt2", 00:37:20.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.010 "is_configured": true, 00:37:20.010 "data_offset": 2048, 00:37:20.010 "data_size": 63488 00:37:20.010 }, 00:37:20.010 { 00:37:20.010 "name": "pt3", 00:37:20.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:20.010 "is_configured": true, 00:37:20.010 "data_offset": 2048, 00:37:20.010 "data_size": 63488 00:37:20.010 }, 00:37:20.010 { 00:37:20.010 "name": null, 00:37:20.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:20.010 "is_configured": false, 00:37:20.010 "data_offset": 2048, 
00:37:20.010 "data_size": 63488 00:37:20.010 } 00:37:20.010 ] 00:37:20.010 }' 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.010 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.578 [2024-10-09 14:06:26.935750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:20.578 [2024-10-09 14:06:26.935943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:20.578 [2024-10-09 14:06:26.936047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:37:20.578 [2024-10-09 14:06:26.936130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:20.578 [2024-10-09 14:06:26.936636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:20.578 [2024-10-09 14:06:26.936766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:37:20.578 [2024-10-09 14:06:26.936864] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:20.578 [2024-10-09 14:06:26.936898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:20.578 [2024-10-09 14:06:26.937004] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:37:20.578 [2024-10-09 14:06:26.937017] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:20.578 [2024-10-09 14:06:26.937265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:20.578 [2024-10-09 14:06:26.937835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:37:20.578 [2024-10-09 14:06:26.937850] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:37:20.578 [2024-10-09 14:06:26.938090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:20.578 pt4 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.578 
14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.578 "name": "raid_bdev1", 00:37:20.578 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:20.578 "strip_size_kb": 64, 00:37:20.578 "state": "online", 00:37:20.578 "raid_level": "raid5f", 00:37:20.578 "superblock": true, 00:37:20.578 "num_base_bdevs": 4, 00:37:20.578 "num_base_bdevs_discovered": 3, 00:37:20.578 "num_base_bdevs_operational": 3, 00:37:20.578 "base_bdevs_list": [ 00:37:20.578 { 00:37:20.578 "name": null, 00:37:20.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:20.578 "is_configured": false, 00:37:20.578 "data_offset": 2048, 00:37:20.578 "data_size": 63488 00:37:20.578 }, 00:37:20.578 { 00:37:20.578 "name": "pt2", 00:37:20.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:20.578 "is_configured": true, 00:37:20.578 "data_offset": 2048, 00:37:20.578 "data_size": 63488 00:37:20.578 }, 00:37:20.578 { 00:37:20.578 "name": "pt3", 00:37:20.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:20.578 "is_configured": true, 00:37:20.578 "data_offset": 2048, 00:37:20.578 "data_size": 63488 00:37:20.578 }, 00:37:20.578 { 00:37:20.578 "name": "pt4", 00:37:20.578 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:20.578 "is_configured": true, 00:37:20.578 "data_offset": 2048, 00:37:20.578 "data_size": 63488 00:37:20.578 } 00:37:20.578 ] 00:37:20.578 }' 00:37:20.578 14:06:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.578 14:06:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 [2024-10-09 14:06:27.399909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:21.146 [2024-10-09 14:06:27.400061] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:21.146 [2024-10-09 14:06:27.400177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:21.146 [2024-10-09 14:06:27.400264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:21.146 [2024-10-09 14:06:27.400279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 [2024-10-09 14:06:27.463929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:21.146 [2024-10-09 14:06:27.464116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.146 [2024-10-09 14:06:27.464152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:37:21.146 [2024-10-09 14:06:27.464165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.146 [2024-10-09 14:06:27.467112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.146 pt1 00:37:21.146 [2024-10-09 14:06:27.467270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:21.146 [2024-10-09 14:06:27.467368] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:21.146 [2024-10-09 14:06:27.467420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:37:21.146 [2024-10-09 14:06:27.467534] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:21.146 [2024-10-09 14:06:27.467576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:21.146 [2024-10-09 14:06:27.467600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:37:21.146 [2024-10-09 14:06:27.467656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:21.146 [2024-10-09 14:06:27.467794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.146 "name": "raid_bdev1", 00:37:21.146 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:21.146 "strip_size_kb": 64, 00:37:21.146 "state": "configuring", 00:37:21.146 "raid_level": "raid5f", 00:37:21.146 "superblock": true, 00:37:21.146 "num_base_bdevs": 4, 00:37:21.146 "num_base_bdevs_discovered": 2, 00:37:21.146 "num_base_bdevs_operational": 3, 00:37:21.146 "base_bdevs_list": [ 00:37:21.146 { 00:37:21.146 "name": null, 00:37:21.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.146 "is_configured": false, 00:37:21.146 "data_offset": 2048, 00:37:21.146 "data_size": 63488 00:37:21.146 }, 00:37:21.146 { 00:37:21.146 "name": "pt2", 00:37:21.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.146 "is_configured": true, 00:37:21.146 "data_offset": 2048, 00:37:21.146 "data_size": 63488 00:37:21.146 }, 00:37:21.146 { 00:37:21.146 "name": "pt3", 00:37:21.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:21.146 "is_configured": true, 00:37:21.146 "data_offset": 2048, 00:37:21.146 "data_size": 63488 00:37:21.146 }, 00:37:21.146 { 00:37:21.146 "name": null, 00:37:21.146 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:21.146 "is_configured": false, 00:37:21.146 "data_offset": 2048, 00:37:21.146 "data_size": 63488 00:37:21.146 } 00:37:21.146 ] 
00:37:21.146 }' 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.146 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.404 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:37:21.404 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.404 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.404 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:21.404 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.663 [2024-10-09 14:06:27.980048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:37:21.663 [2024-10-09 14:06:27.980245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:21.663 [2024-10-09 14:06:27.980347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:37:21.663 [2024-10-09 14:06:27.980436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:21.663 [2024-10-09 14:06:27.981027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:21.663 [2024-10-09 14:06:27.981068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:37:21.663 [2024-10-09 14:06:27.981148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:37:21.663 [2024-10-09 14:06:27.981178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:37:21.663 [2024-10-09 14:06:27.981286] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:37:21.663 [2024-10-09 14:06:27.981303] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:21.663 [2024-10-09 14:06:27.981588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:21.663 [2024-10-09 14:06:27.982182] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:37:21.663 [2024-10-09 14:06:27.982319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:37:21.663 [2024-10-09 14:06:27.982557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:21.663 pt4 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:21.663 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:21.663 14:06:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.664 14:06:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.664 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:21.664 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:21.664 "name": "raid_bdev1", 00:37:21.664 "uuid": "240d1926-d0fa-4e78-9c01-f3cac7288774", 00:37:21.664 "strip_size_kb": 64, 00:37:21.664 "state": "online", 00:37:21.664 "raid_level": "raid5f", 00:37:21.664 "superblock": true, 00:37:21.664 "num_base_bdevs": 4, 00:37:21.664 "num_base_bdevs_discovered": 3, 00:37:21.664 "num_base_bdevs_operational": 3, 00:37:21.664 "base_bdevs_list": [ 00:37:21.664 { 00:37:21.664 "name": null, 00:37:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:21.664 "is_configured": false, 00:37:21.664 "data_offset": 2048, 00:37:21.664 "data_size": 63488 00:37:21.664 }, 00:37:21.664 { 00:37:21.664 "name": "pt2", 00:37:21.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:21.664 "is_configured": true, 00:37:21.664 "data_offset": 2048, 00:37:21.664 "data_size": 63488 00:37:21.664 }, 00:37:21.664 { 00:37:21.664 "name": "pt3", 00:37:21.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:37:21.664 "is_configured": true, 00:37:21.664 "data_offset": 2048, 00:37:21.664 "data_size": 63488 
00:37:21.664 }, 00:37:21.664 { 00:37:21.664 "name": "pt4", 00:37:21.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:37:21.664 "is_configured": true, 00:37:21.664 "data_offset": 2048, 00:37:21.664 "data_size": 63488 00:37:21.664 } 00:37:21.664 ] 00:37:21.664 }' 00:37:21.664 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:21.664 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.922 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:37:21.922 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:21.922 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:21.922 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:21.922 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:37:22.180 [2024-10-09 14:06:28.480489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 240d1926-d0fa-4e78-9c01-f3cac7288774 '!=' 240d1926-d0fa-4e78-9c01-f3cac7288774 ']' 00:37:22.180 14:06:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94977 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94977 ']' 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94977 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94977 00:37:22.180 killing process with pid 94977 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94977' 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94977 00:37:22.180 [2024-10-09 14:06:28.558522] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:22.180 [2024-10-09 14:06:28.558615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:22.180 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94977 00:37:22.180 [2024-10-09 14:06:28.558699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:22.180 [2024-10-09 14:06:28.558712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:37:22.180 [2024-10-09 14:06:28.605590] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:22.439 ************************************ 00:37:22.439 END TEST raid5f_superblock_test 00:37:22.439 
************************************ 00:37:22.439 14:06:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:37:22.439 00:37:22.439 real 0m7.281s 00:37:22.439 user 0m12.379s 00:37:22.439 sys 0m1.623s 00:37:22.439 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:22.439 14:06:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.439 14:06:28 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:37:22.439 14:06:28 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:37:22.439 14:06:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:22.439 14:06:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:22.439 14:06:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:22.439 ************************************ 00:37:22.439 START TEST raid5f_rebuild_test 00:37:22.439 ************************************ 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:37:22.439 14:06:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95446 00:37:22.439 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95446 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95446 ']' 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.440 14:06:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:22.698 [2024-10-09 14:06:29.040361] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:37:22.698 [2024-10-09 14:06:29.040571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95446 ] 00:37:22.698 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:22.698 Zero copy mechanism will not be used. 00:37:22.698 [2024-10-09 14:06:29.220486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.956 [2024-10-09 14:06:29.266757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.956 [2024-10-09 14:06:29.310059] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:22.956 [2024-10-09 14:06:29.310089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.525 BaseBdev1_malloc 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:37:23.525 [2024-10-09 14:06:29.994074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:23.525 [2024-10-09 14:06:29.994146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.525 [2024-10-09 14:06:29.994176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:23.525 [2024-10-09 14:06:29.994195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.525 [2024-10-09 14:06:29.996668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.525 [2024-10-09 14:06:29.996705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:23.525 BaseBdev1 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.525 14:06:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.525 BaseBdev2_malloc 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.525 [2024-10-09 14:06:30.025898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:23.525 [2024-10-09 14:06:30.025973] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.525 [2024-10-09 14:06:30.026009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:23.525 [2024-10-09 14:06:30.026027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.525 [2024-10-09 14:06:30.029685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.525 [2024-10-09 14:06:30.029735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:23.525 BaseBdev2 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.525 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.525 BaseBdev3_malloc 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.526 [2024-10-09 14:06:30.050995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:23.526 [2024-10-09 14:06:30.051046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.526 [2024-10-09 14:06:30.051076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:23.526 
[2024-10-09 14:06:30.051087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.526 [2024-10-09 14:06:30.053479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.526 [2024-10-09 14:06:30.053517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:23.526 BaseBdev3 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.526 BaseBdev4_malloc 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.526 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.526 [2024-10-09 14:06:30.072021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:37:23.526 [2024-10-09 14:06:30.072077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.526 [2024-10-09 14:06:30.072105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:23.526 [2024-10-09 14:06:30.072132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.785 [2024-10-09 14:06:30.074807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:37:23.785 [2024-10-09 14:06:30.074842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:23.785 BaseBdev4 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.785 spare_malloc 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.785 spare_delay 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.785 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.785 [2024-10-09 14:06:30.100999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:23.785 [2024-10-09 14:06:30.101054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.785 [2024-10-09 14:06:30.101077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:23.785 [2024-10-09 14:06:30.101088] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.785 [2024-10-09 14:06:30.103654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.786 [2024-10-09 14:06:30.103690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:23.786 spare 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.786 [2024-10-09 14:06:30.109098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:23.786 [2024-10-09 14:06:30.111239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:23.786 [2024-10-09 14:06:30.111306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:23.786 [2024-10-09 14:06:30.111347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:23.786 [2024-10-09 14:06:30.111429] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:37:23.786 [2024-10-09 14:06:30.111445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:37:23.786 [2024-10-09 14:06:30.111727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:23.786 [2024-10-09 14:06:30.112184] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:37:23.786 [2024-10-09 14:06:30.112206] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:37:23.786 [2024-10-09 
14:06:30.112329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:23.786 "name": "raid_bdev1", 00:37:23.786 "uuid": 
"00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:23.786 "strip_size_kb": 64, 00:37:23.786 "state": "online", 00:37:23.786 "raid_level": "raid5f", 00:37:23.786 "superblock": false, 00:37:23.786 "num_base_bdevs": 4, 00:37:23.786 "num_base_bdevs_discovered": 4, 00:37:23.786 "num_base_bdevs_operational": 4, 00:37:23.786 "base_bdevs_list": [ 00:37:23.786 { 00:37:23.786 "name": "BaseBdev1", 00:37:23.786 "uuid": "a86c3206-c0f9-53ec-bcb8-7bb19e37f8e6", 00:37:23.786 "is_configured": true, 00:37:23.786 "data_offset": 0, 00:37:23.786 "data_size": 65536 00:37:23.786 }, 00:37:23.786 { 00:37:23.786 "name": "BaseBdev2", 00:37:23.786 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:23.786 "is_configured": true, 00:37:23.786 "data_offset": 0, 00:37:23.786 "data_size": 65536 00:37:23.786 }, 00:37:23.786 { 00:37:23.786 "name": "BaseBdev3", 00:37:23.786 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:23.786 "is_configured": true, 00:37:23.786 "data_offset": 0, 00:37:23.786 "data_size": 65536 00:37:23.786 }, 00:37:23.786 { 00:37:23.786 "name": "BaseBdev4", 00:37:23.786 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:23.786 "is_configured": true, 00:37:23.786 "data_offset": 0, 00:37:23.786 "data_size": 65536 00:37:23.786 } 00:37:23.786 ] 00:37:23.786 }' 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:23.786 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.045 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:24.045 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:24.045 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.045 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.045 [2024-10-09 14:06:30.566532] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:37:24.045 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:24.304 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:24.562 [2024-10-09 14:06:30.902471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:24.562 /dev/nbd0 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:24.562 1+0 records in 00:37:24.562 1+0 records out 00:37:24.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322065 s, 12.7 MB/s 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:24.562 14:06:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:24.562 14:06:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:37:24.563 14:06:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:37:25.129 512+0 records in 00:37:25.130 512+0 records out 00:37:25.130 100663296 bytes (101 MB, 96 MiB) copied, 0.492377 s, 204 MB/s 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:25.130 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:25.388 [2024-10-09 14:06:31.745993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 [2024-10-09 14:06:31.758082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:25.388 "name": "raid_bdev1", 00:37:25.388 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:25.388 "strip_size_kb": 64, 00:37:25.388 "state": "online", 00:37:25.388 "raid_level": "raid5f", 00:37:25.388 "superblock": false, 00:37:25.388 "num_base_bdevs": 4, 00:37:25.388 "num_base_bdevs_discovered": 3, 00:37:25.388 "num_base_bdevs_operational": 3, 00:37:25.388 "base_bdevs_list": [ 00:37:25.388 { 00:37:25.388 "name": null, 00:37:25.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.388 "is_configured": false, 00:37:25.388 "data_offset": 0, 00:37:25.388 "data_size": 65536 00:37:25.388 }, 00:37:25.388 { 00:37:25.388 "name": "BaseBdev2", 00:37:25.388 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:25.388 "is_configured": true, 00:37:25.388 
"data_offset": 0, 00:37:25.388 "data_size": 65536 00:37:25.388 }, 00:37:25.388 { 00:37:25.388 "name": "BaseBdev3", 00:37:25.388 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:25.388 "is_configured": true, 00:37:25.388 "data_offset": 0, 00:37:25.388 "data_size": 65536 00:37:25.388 }, 00:37:25.388 { 00:37:25.388 "name": "BaseBdev4", 00:37:25.388 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:25.388 "is_configured": true, 00:37:25.388 "data_offset": 0, 00:37:25.388 "data_size": 65536 00:37:25.388 } 00:37:25.388 ] 00:37:25.388 }' 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:25.388 14:06:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.647 14:06:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:25.647 14:06:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:25.647 14:06:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:25.647 [2024-10-09 14:06:32.170246] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:25.647 [2024-10-09 14:06:32.173730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:37:25.647 [2024-10-09 14:06:32.176909] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:25.647 14:06:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:25.647 14:06:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:27.070 "name": "raid_bdev1", 00:37:27.070 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:27.070 "strip_size_kb": 64, 00:37:27.070 "state": "online", 00:37:27.070 "raid_level": "raid5f", 00:37:27.070 "superblock": false, 00:37:27.070 "num_base_bdevs": 4, 00:37:27.070 "num_base_bdevs_discovered": 4, 00:37:27.070 "num_base_bdevs_operational": 4, 00:37:27.070 "process": { 00:37:27.070 "type": "rebuild", 00:37:27.070 "target": "spare", 00:37:27.070 "progress": { 00:37:27.070 "blocks": 19200, 00:37:27.070 "percent": 9 00:37:27.070 } 00:37:27.070 }, 00:37:27.070 "base_bdevs_list": [ 00:37:27.070 { 00:37:27.070 "name": "spare", 00:37:27.070 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:27.070 "is_configured": true, 00:37:27.070 "data_offset": 0, 00:37:27.070 "data_size": 65536 00:37:27.070 }, 00:37:27.070 { 00:37:27.070 "name": "BaseBdev2", 00:37:27.070 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:27.070 "is_configured": true, 00:37:27.070 "data_offset": 0, 00:37:27.070 "data_size": 65536 00:37:27.070 }, 00:37:27.070 { 00:37:27.070 "name": "BaseBdev3", 00:37:27.070 "uuid": 
"e280dbea-541b-526a-b764-ebadec73584a", 00:37:27.070 "is_configured": true, 00:37:27.070 "data_offset": 0, 00:37:27.070 "data_size": 65536 00:37:27.070 }, 00:37:27.070 { 00:37:27.070 "name": "BaseBdev4", 00:37:27.070 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:27.070 "is_configured": true, 00:37:27.070 "data_offset": 0, 00:37:27.070 "data_size": 65536 00:37:27.070 } 00:37:27.070 ] 00:37:27.070 }' 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.070 [2024-10-09 14:06:33.329733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:27.070 [2024-10-09 14:06:33.385532] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:27.070 [2024-10-09 14:06:33.385618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:27.070 [2024-10-09 14:06:33.385647] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:27.070 [2024-10-09 14:06:33.385658] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:27.070 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:27.071 "name": "raid_bdev1", 00:37:27.071 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:27.071 "strip_size_kb": 64, 00:37:27.071 "state": "online", 00:37:27.071 "raid_level": "raid5f", 00:37:27.071 "superblock": false, 00:37:27.071 "num_base_bdevs": 4, 00:37:27.071 "num_base_bdevs_discovered": 3, 00:37:27.071 
"num_base_bdevs_operational": 3, 00:37:27.071 "base_bdevs_list": [ 00:37:27.071 { 00:37:27.071 "name": null, 00:37:27.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.071 "is_configured": false, 00:37:27.071 "data_offset": 0, 00:37:27.071 "data_size": 65536 00:37:27.071 }, 00:37:27.071 { 00:37:27.071 "name": "BaseBdev2", 00:37:27.071 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:27.071 "is_configured": true, 00:37:27.071 "data_offset": 0, 00:37:27.071 "data_size": 65536 00:37:27.071 }, 00:37:27.071 { 00:37:27.071 "name": "BaseBdev3", 00:37:27.071 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:27.071 "is_configured": true, 00:37:27.071 "data_offset": 0, 00:37:27.071 "data_size": 65536 00:37:27.071 }, 00:37:27.071 { 00:37:27.071 "name": "BaseBdev4", 00:37:27.071 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:27.071 "is_configured": true, 00:37:27.071 "data_offset": 0, 00:37:27.071 "data_size": 65536 00:37:27.071 } 00:37:27.071 ] 00:37:27.071 }' 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:27.071 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:27.329 14:06:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.329 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:27.587 "name": "raid_bdev1", 00:37:27.587 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:27.587 "strip_size_kb": 64, 00:37:27.587 "state": "online", 00:37:27.587 "raid_level": "raid5f", 00:37:27.587 "superblock": false, 00:37:27.587 "num_base_bdevs": 4, 00:37:27.587 "num_base_bdevs_discovered": 3, 00:37:27.587 "num_base_bdevs_operational": 3, 00:37:27.587 "base_bdevs_list": [ 00:37:27.587 { 00:37:27.587 "name": null, 00:37:27.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.587 "is_configured": false, 00:37:27.587 "data_offset": 0, 00:37:27.587 "data_size": 65536 00:37:27.587 }, 00:37:27.587 { 00:37:27.587 "name": "BaseBdev2", 00:37:27.587 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:27.587 "is_configured": true, 00:37:27.587 "data_offset": 0, 00:37:27.587 "data_size": 65536 00:37:27.587 }, 00:37:27.587 { 00:37:27.587 "name": "BaseBdev3", 00:37:27.587 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:27.587 "is_configured": true, 00:37:27.587 "data_offset": 0, 00:37:27.587 "data_size": 65536 00:37:27.587 }, 00:37:27.587 { 00:37:27.587 "name": "BaseBdev4", 00:37:27.587 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:27.587 "is_configured": true, 00:37:27.587 "data_offset": 0, 00:37:27.587 "data_size": 65536 00:37:27.587 } 00:37:27.587 ] 00:37:27.587 }' 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:27.587 14:06:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:27.588 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:27.588 14:06:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:27.588 [2024-10-09 14:06:33.999601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:27.588 [2024-10-09 14:06:34.002923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:37:27.588 [2024-10-09 14:06:34.005445] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:27.588 14:06:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:27.588 14:06:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:28.523 "name": "raid_bdev1", 00:37:28.523 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:28.523 "strip_size_kb": 64, 00:37:28.523 "state": "online", 00:37:28.523 "raid_level": "raid5f", 00:37:28.523 "superblock": false, 00:37:28.523 "num_base_bdevs": 4, 00:37:28.523 "num_base_bdevs_discovered": 4, 00:37:28.523 "num_base_bdevs_operational": 4, 00:37:28.523 "process": { 00:37:28.523 "type": "rebuild", 00:37:28.523 "target": "spare", 00:37:28.523 "progress": { 00:37:28.523 "blocks": 19200, 00:37:28.523 "percent": 9 00:37:28.523 } 00:37:28.523 }, 00:37:28.523 "base_bdevs_list": [ 00:37:28.523 { 00:37:28.523 "name": "spare", 00:37:28.523 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:28.523 "is_configured": true, 00:37:28.523 "data_offset": 0, 00:37:28.523 "data_size": 65536 00:37:28.523 }, 00:37:28.523 { 00:37:28.523 "name": "BaseBdev2", 00:37:28.523 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:28.523 "is_configured": true, 00:37:28.523 "data_offset": 0, 00:37:28.523 "data_size": 65536 00:37:28.523 }, 00:37:28.523 { 00:37:28.523 "name": "BaseBdev3", 00:37:28.523 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:28.523 "is_configured": true, 00:37:28.523 "data_offset": 0, 00:37:28.523 "data_size": 65536 00:37:28.523 }, 00:37:28.523 { 00:37:28.523 "name": "BaseBdev4", 00:37:28.523 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:28.523 "is_configured": true, 00:37:28.523 "data_offset": 0, 00:37:28.523 "data_size": 65536 00:37:28.523 } 00:37:28.523 ] 00:37:28.523 }' 00:37:28.523 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=529 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:28.783 
"name": "raid_bdev1", 00:37:28.783 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:28.783 "strip_size_kb": 64, 00:37:28.783 "state": "online", 00:37:28.783 "raid_level": "raid5f", 00:37:28.783 "superblock": false, 00:37:28.783 "num_base_bdevs": 4, 00:37:28.783 "num_base_bdevs_discovered": 4, 00:37:28.783 "num_base_bdevs_operational": 4, 00:37:28.783 "process": { 00:37:28.783 "type": "rebuild", 00:37:28.783 "target": "spare", 00:37:28.783 "progress": { 00:37:28.783 "blocks": 21120, 00:37:28.783 "percent": 10 00:37:28.783 } 00:37:28.783 }, 00:37:28.783 "base_bdevs_list": [ 00:37:28.783 { 00:37:28.783 "name": "spare", 00:37:28.783 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:28.783 "is_configured": true, 00:37:28.783 "data_offset": 0, 00:37:28.783 "data_size": 65536 00:37:28.783 }, 00:37:28.783 { 00:37:28.783 "name": "BaseBdev2", 00:37:28.783 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:28.783 "is_configured": true, 00:37:28.783 "data_offset": 0, 00:37:28.783 "data_size": 65536 00:37:28.783 }, 00:37:28.783 { 00:37:28.783 "name": "BaseBdev3", 00:37:28.783 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:28.783 "is_configured": true, 00:37:28.783 "data_offset": 0, 00:37:28.783 "data_size": 65536 00:37:28.783 }, 00:37:28.783 { 00:37:28.783 "name": "BaseBdev4", 00:37:28.783 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:28.783 "is_configured": true, 00:37:28.783 "data_offset": 0, 00:37:28.783 "data_size": 65536 00:37:28.783 } 00:37:28.783 ] 00:37:28.783 }' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:28.783 14:06:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:28.783 14:06:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:30.159 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:30.159 "name": "raid_bdev1", 00:37:30.159 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:30.159 "strip_size_kb": 64, 00:37:30.159 "state": "online", 00:37:30.159 "raid_level": "raid5f", 00:37:30.159 "superblock": false, 00:37:30.159 "num_base_bdevs": 4, 00:37:30.159 "num_base_bdevs_discovered": 4, 00:37:30.159 "num_base_bdevs_operational": 4, 00:37:30.159 "process": { 00:37:30.159 "type": "rebuild", 00:37:30.159 "target": "spare", 00:37:30.159 "progress": { 00:37:30.159 "blocks": 42240, 00:37:30.159 "percent": 21 00:37:30.159 } 00:37:30.159 }, 00:37:30.159 "base_bdevs_list": [ 00:37:30.159 { 
00:37:30.159 "name": "spare", 00:37:30.159 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:30.159 "is_configured": true, 00:37:30.159 "data_offset": 0, 00:37:30.159 "data_size": 65536 00:37:30.159 }, 00:37:30.159 { 00:37:30.159 "name": "BaseBdev2", 00:37:30.159 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:30.159 "is_configured": true, 00:37:30.159 "data_offset": 0, 00:37:30.159 "data_size": 65536 00:37:30.159 }, 00:37:30.159 { 00:37:30.159 "name": "BaseBdev3", 00:37:30.159 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:30.160 "is_configured": true, 00:37:30.160 "data_offset": 0, 00:37:30.160 "data_size": 65536 00:37:30.160 }, 00:37:30.160 { 00:37:30.160 "name": "BaseBdev4", 00:37:30.160 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:30.160 "is_configured": true, 00:37:30.160 "data_offset": 0, 00:37:30.160 "data_size": 65536 00:37:30.160 } 00:37:30.160 ] 00:37:30.160 }' 00:37:30.160 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:30.160 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:30.160 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:30.160 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:30.160 14:06:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:31.096 "name": "raid_bdev1", 00:37:31.096 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:31.096 "strip_size_kb": 64, 00:37:31.096 "state": "online", 00:37:31.096 "raid_level": "raid5f", 00:37:31.096 "superblock": false, 00:37:31.096 "num_base_bdevs": 4, 00:37:31.096 "num_base_bdevs_discovered": 4, 00:37:31.096 "num_base_bdevs_operational": 4, 00:37:31.096 "process": { 00:37:31.096 "type": "rebuild", 00:37:31.096 "target": "spare", 00:37:31.096 "progress": { 00:37:31.096 "blocks": 65280, 00:37:31.096 "percent": 33 00:37:31.096 } 00:37:31.096 }, 00:37:31.096 "base_bdevs_list": [ 00:37:31.096 { 00:37:31.096 "name": "spare", 00:37:31.096 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:31.096 "is_configured": true, 00:37:31.096 "data_offset": 0, 00:37:31.096 "data_size": 65536 00:37:31.096 }, 00:37:31.096 { 00:37:31.096 "name": "BaseBdev2", 00:37:31.096 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:31.096 "is_configured": true, 00:37:31.096 "data_offset": 0, 00:37:31.096 "data_size": 65536 00:37:31.096 }, 00:37:31.096 { 00:37:31.096 "name": "BaseBdev3", 00:37:31.096 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:31.096 "is_configured": true, 00:37:31.096 "data_offset": 0, 00:37:31.096 
"data_size": 65536 00:37:31.096 }, 00:37:31.096 { 00:37:31.096 "name": "BaseBdev4", 00:37:31.096 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:31.096 "is_configured": true, 00:37:31.096 "data_offset": 0, 00:37:31.096 "data_size": 65536 00:37:31.096 } 00:37:31.096 ] 00:37:31.096 }' 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:31.096 14:06:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:32.033 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:32.292 "name": "raid_bdev1", 00:37:32.292 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:32.292 "strip_size_kb": 64, 00:37:32.292 "state": "online", 00:37:32.292 "raid_level": "raid5f", 00:37:32.292 "superblock": false, 00:37:32.292 "num_base_bdevs": 4, 00:37:32.292 "num_base_bdevs_discovered": 4, 00:37:32.292 "num_base_bdevs_operational": 4, 00:37:32.292 "process": { 00:37:32.292 "type": "rebuild", 00:37:32.292 "target": "spare", 00:37:32.292 "progress": { 00:37:32.292 "blocks": 86400, 00:37:32.292 "percent": 43 00:37:32.292 } 00:37:32.292 }, 00:37:32.292 "base_bdevs_list": [ 00:37:32.292 { 00:37:32.292 "name": "spare", 00:37:32.292 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:32.292 "is_configured": true, 00:37:32.292 "data_offset": 0, 00:37:32.292 "data_size": 65536 00:37:32.292 }, 00:37:32.292 { 00:37:32.292 "name": "BaseBdev2", 00:37:32.292 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:32.292 "is_configured": true, 00:37:32.292 "data_offset": 0, 00:37:32.292 "data_size": 65536 00:37:32.292 }, 00:37:32.292 { 00:37:32.292 "name": "BaseBdev3", 00:37:32.292 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:32.292 "is_configured": true, 00:37:32.292 "data_offset": 0, 00:37:32.292 "data_size": 65536 00:37:32.292 }, 00:37:32.292 { 00:37:32.292 "name": "BaseBdev4", 00:37:32.292 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:32.292 "is_configured": true, 00:37:32.292 "data_offset": 0, 00:37:32.292 "data_size": 65536 00:37:32.292 } 00:37:32.292 ] 00:37:32.292 }' 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:32.292 14:06:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:33.229 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:33.230 "name": "raid_bdev1", 00:37:33.230 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:33.230 "strip_size_kb": 64, 00:37:33.230 "state": "online", 00:37:33.230 "raid_level": "raid5f", 00:37:33.230 "superblock": false, 00:37:33.230 "num_base_bdevs": 4, 00:37:33.230 "num_base_bdevs_discovered": 4, 00:37:33.230 "num_base_bdevs_operational": 4, 00:37:33.230 "process": { 00:37:33.230 "type": "rebuild", 00:37:33.230 "target": "spare", 00:37:33.230 
"progress": { 00:37:33.230 "blocks": 107520, 00:37:33.230 "percent": 54 00:37:33.230 } 00:37:33.230 }, 00:37:33.230 "base_bdevs_list": [ 00:37:33.230 { 00:37:33.230 "name": "spare", 00:37:33.230 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:33.230 "is_configured": true, 00:37:33.230 "data_offset": 0, 00:37:33.230 "data_size": 65536 00:37:33.230 }, 00:37:33.230 { 00:37:33.230 "name": "BaseBdev2", 00:37:33.230 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:33.230 "is_configured": true, 00:37:33.230 "data_offset": 0, 00:37:33.230 "data_size": 65536 00:37:33.230 }, 00:37:33.230 { 00:37:33.230 "name": "BaseBdev3", 00:37:33.230 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:33.230 "is_configured": true, 00:37:33.230 "data_offset": 0, 00:37:33.230 "data_size": 65536 00:37:33.230 }, 00:37:33.230 { 00:37:33.230 "name": "BaseBdev4", 00:37:33.230 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:33.230 "is_configured": true, 00:37:33.230 "data_offset": 0, 00:37:33.230 "data_size": 65536 00:37:33.230 } 00:37:33.230 ] 00:37:33.230 }' 00:37:33.230 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:33.489 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:33.489 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:33.489 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:33.489 14:06:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:34.426 14:06:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:34.426 "name": "raid_bdev1", 00:37:34.426 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:34.426 "strip_size_kb": 64, 00:37:34.426 "state": "online", 00:37:34.426 "raid_level": "raid5f", 00:37:34.426 "superblock": false, 00:37:34.426 "num_base_bdevs": 4, 00:37:34.426 "num_base_bdevs_discovered": 4, 00:37:34.426 "num_base_bdevs_operational": 4, 00:37:34.426 "process": { 00:37:34.426 "type": "rebuild", 00:37:34.426 "target": "spare", 00:37:34.426 "progress": { 00:37:34.426 "blocks": 128640, 00:37:34.426 "percent": 65 00:37:34.426 } 00:37:34.426 }, 00:37:34.426 "base_bdevs_list": [ 00:37:34.426 { 00:37:34.426 "name": "spare", 00:37:34.426 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:34.426 "is_configured": true, 00:37:34.426 "data_offset": 0, 00:37:34.426 "data_size": 65536 00:37:34.426 }, 00:37:34.426 { 00:37:34.426 "name": "BaseBdev2", 00:37:34.426 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:34.426 "is_configured": true, 00:37:34.426 "data_offset": 0, 00:37:34.426 "data_size": 65536 00:37:34.426 }, 00:37:34.426 { 
00:37:34.426 "name": "BaseBdev3", 00:37:34.426 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:34.426 "is_configured": true, 00:37:34.426 "data_offset": 0, 00:37:34.426 "data_size": 65536 00:37:34.426 }, 00:37:34.426 { 00:37:34.426 "name": "BaseBdev4", 00:37:34.426 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:34.426 "is_configured": true, 00:37:34.426 "data_offset": 0, 00:37:34.426 "data_size": 65536 00:37:34.426 } 00:37:34.426 ] 00:37:34.426 }' 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:34.426 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:34.685 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:34.685 14:06:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:35.621 14:06:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:35.621 
14:06:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:35.622 "name": "raid_bdev1", 00:37:35.622 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:35.622 "strip_size_kb": 64, 00:37:35.622 "state": "online", 00:37:35.622 "raid_level": "raid5f", 00:37:35.622 "superblock": false, 00:37:35.622 "num_base_bdevs": 4, 00:37:35.622 "num_base_bdevs_discovered": 4, 00:37:35.622 "num_base_bdevs_operational": 4, 00:37:35.622 "process": { 00:37:35.622 "type": "rebuild", 00:37:35.622 "target": "spare", 00:37:35.622 "progress": { 00:37:35.622 "blocks": 151680, 00:37:35.622 "percent": 77 00:37:35.622 } 00:37:35.622 }, 00:37:35.622 "base_bdevs_list": [ 00:37:35.622 { 00:37:35.622 "name": "spare", 00:37:35.622 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:35.622 "is_configured": true, 00:37:35.622 "data_offset": 0, 00:37:35.622 "data_size": 65536 00:37:35.622 }, 00:37:35.622 { 00:37:35.622 "name": "BaseBdev2", 00:37:35.622 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:35.622 "is_configured": true, 00:37:35.622 "data_offset": 0, 00:37:35.622 "data_size": 65536 00:37:35.622 }, 00:37:35.622 { 00:37:35.622 "name": "BaseBdev3", 00:37:35.622 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:35.622 "is_configured": true, 00:37:35.622 "data_offset": 0, 00:37:35.622 "data_size": 65536 00:37:35.622 }, 00:37:35.622 { 00:37:35.622 "name": "BaseBdev4", 00:37:35.622 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:35.622 "is_configured": true, 00:37:35.622 "data_offset": 0, 00:37:35.622 "data_size": 65536 00:37:35.622 } 00:37:35.622 ] 00:37:35.622 }' 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:35.622 14:06:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:35.622 14:06:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:36.999 "name": "raid_bdev1", 00:37:36.999 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:36.999 "strip_size_kb": 64, 00:37:36.999 "state": "online", 00:37:36.999 "raid_level": "raid5f", 00:37:36.999 "superblock": false, 00:37:36.999 "num_base_bdevs": 4, 00:37:36.999 
"num_base_bdevs_discovered": 4, 00:37:36.999 "num_base_bdevs_operational": 4, 00:37:36.999 "process": { 00:37:36.999 "type": "rebuild", 00:37:36.999 "target": "spare", 00:37:36.999 "progress": { 00:37:36.999 "blocks": 172800, 00:37:36.999 "percent": 87 00:37:36.999 } 00:37:36.999 }, 00:37:36.999 "base_bdevs_list": [ 00:37:36.999 { 00:37:36.999 "name": "spare", 00:37:36.999 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:36.999 "is_configured": true, 00:37:36.999 "data_offset": 0, 00:37:36.999 "data_size": 65536 00:37:36.999 }, 00:37:36.999 { 00:37:36.999 "name": "BaseBdev2", 00:37:36.999 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:36.999 "is_configured": true, 00:37:36.999 "data_offset": 0, 00:37:36.999 "data_size": 65536 00:37:36.999 }, 00:37:36.999 { 00:37:36.999 "name": "BaseBdev3", 00:37:36.999 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:36.999 "is_configured": true, 00:37:36.999 "data_offset": 0, 00:37:36.999 "data_size": 65536 00:37:36.999 }, 00:37:36.999 { 00:37:36.999 "name": "BaseBdev4", 00:37:36.999 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:36.999 "is_configured": true, 00:37:36.999 "data_offset": 0, 00:37:36.999 "data_size": 65536 00:37:36.999 } 00:37:36.999 ] 00:37:36.999 }' 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:36.999 14:06:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:37.976 "name": "raid_bdev1", 00:37:37.976 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:37.976 "strip_size_kb": 64, 00:37:37.976 "state": "online", 00:37:37.976 "raid_level": "raid5f", 00:37:37.976 "superblock": false, 00:37:37.976 "num_base_bdevs": 4, 00:37:37.976 "num_base_bdevs_discovered": 4, 00:37:37.976 "num_base_bdevs_operational": 4, 00:37:37.976 "process": { 00:37:37.976 "type": "rebuild", 00:37:37.976 "target": "spare", 00:37:37.976 "progress": { 00:37:37.976 "blocks": 193920, 00:37:37.976 "percent": 98 00:37:37.976 } 00:37:37.976 }, 00:37:37.976 "base_bdevs_list": [ 00:37:37.976 { 00:37:37.976 "name": "spare", 00:37:37.976 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:37.976 "is_configured": true, 00:37:37.976 "data_offset": 0, 00:37:37.976 "data_size": 65536 00:37:37.976 }, 00:37:37.976 { 00:37:37.976 "name": "BaseBdev2", 00:37:37.976 "uuid": 
"e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:37.976 "is_configured": true, 00:37:37.976 "data_offset": 0, 00:37:37.976 "data_size": 65536 00:37:37.976 }, 00:37:37.976 { 00:37:37.976 "name": "BaseBdev3", 00:37:37.976 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:37.976 "is_configured": true, 00:37:37.976 "data_offset": 0, 00:37:37.976 "data_size": 65536 00:37:37.976 }, 00:37:37.976 { 00:37:37.976 "name": "BaseBdev4", 00:37:37.976 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:37.976 "is_configured": true, 00:37:37.976 "data_offset": 0, 00:37:37.976 "data_size": 65536 00:37:37.976 } 00:37:37.976 ] 00:37:37.976 }' 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:37.976 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:37.976 [2024-10-09 14:06:44.374624] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:37.976 [2024-10-09 14:06:44.374717] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:37.976 [2024-10-09 14:06:44.374758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:37.977 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:37.977 14:06:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:38.914 "name": "raid_bdev1", 00:37:38.914 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:38.914 "strip_size_kb": 64, 00:37:38.914 "state": "online", 00:37:38.914 "raid_level": "raid5f", 00:37:38.914 "superblock": false, 00:37:38.914 "num_base_bdevs": 4, 00:37:38.914 "num_base_bdevs_discovered": 4, 00:37:38.914 "num_base_bdevs_operational": 4, 00:37:38.914 "base_bdevs_list": [ 00:37:38.914 { 00:37:38.914 "name": "spare", 00:37:38.914 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:38.914 "is_configured": true, 00:37:38.914 "data_offset": 0, 00:37:38.914 "data_size": 65536 00:37:38.914 }, 00:37:38.914 { 00:37:38.914 "name": "BaseBdev2", 00:37:38.914 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:38.914 "is_configured": true, 00:37:38.914 "data_offset": 0, 00:37:38.914 "data_size": 65536 00:37:38.914 }, 00:37:38.914 { 00:37:38.914 "name": "BaseBdev3", 00:37:38.914 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:38.914 "is_configured": true, 00:37:38.914 "data_offset": 0, 00:37:38.914 "data_size": 65536 00:37:38.914 }, 00:37:38.914 { 00:37:38.914 "name": "BaseBdev4", 00:37:38.914 
"uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:38.914 "is_configured": true, 00:37:38.914 "data_offset": 0, 00:37:38.914 "data_size": 65536 00:37:38.914 } 00:37:38.914 ] 00:37:38.914 }' 00:37:38.914 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.173 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:39.173 "name": "raid_bdev1", 00:37:39.173 "uuid": 
"00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:39.173 "strip_size_kb": 64, 00:37:39.173 "state": "online", 00:37:39.173 "raid_level": "raid5f", 00:37:39.173 "superblock": false, 00:37:39.173 "num_base_bdevs": 4, 00:37:39.173 "num_base_bdevs_discovered": 4, 00:37:39.173 "num_base_bdevs_operational": 4, 00:37:39.174 "base_bdevs_list": [ 00:37:39.174 { 00:37:39.174 "name": "spare", 00:37:39.174 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev2", 00:37:39.174 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev3", 00:37:39.174 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev4", 00:37:39.174 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 } 00:37:39.174 ] 00:37:39.174 }' 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:39.174 14:06:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:39.174 "name": "raid_bdev1", 00:37:39.174 "uuid": "00d71be8-e629-424c-b102-35f11d96e7b8", 00:37:39.174 "strip_size_kb": 64, 00:37:39.174 "state": "online", 00:37:39.174 "raid_level": "raid5f", 00:37:39.174 "superblock": false, 00:37:39.174 "num_base_bdevs": 4, 00:37:39.174 "num_base_bdevs_discovered": 4, 00:37:39.174 "num_base_bdevs_operational": 4, 00:37:39.174 "base_bdevs_list": [ 00:37:39.174 { 00:37:39.174 "name": "spare", 00:37:39.174 "uuid": "075b273f-deb6-59e2-b281-4194b2aa5405", 00:37:39.174 "is_configured": 
true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev2", 00:37:39.174 "uuid": "e68bf54d-94a4-5032-a4f1-a23a689eb844", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev3", 00:37:39.174 "uuid": "e280dbea-541b-526a-b764-ebadec73584a", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 }, 00:37:39.174 { 00:37:39.174 "name": "BaseBdev4", 00:37:39.174 "uuid": "0d1d7d39-9e71-50ba-b14e-b964a59c7eec", 00:37:39.174 "is_configured": true, 00:37:39.174 "data_offset": 0, 00:37:39.174 "data_size": 65536 00:37:39.174 } 00:37:39.174 ] 00:37:39.174 }' 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:39.174 14:06:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.742 [2024-10-09 14:06:46.129205] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:39.742 [2024-10-09 14:06:46.129237] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:39.742 [2024-10-09 14:06:46.129340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:39.742 [2024-10-09 14:06:46.129429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:39.742 [2024-10-09 14:06:46.129444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:37:39.742 14:06:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:39.742 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:40.002 /dev/nbd0 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:40.002 1+0 records in 00:37:40.002 1+0 records out 00:37:40.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228175 s, 18.0 MB/s 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # return 0 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:40.002 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:40.261 /dev/nbd1 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:40.261 1+0 records in 00:37:40.261 1+0 records out 00:37:40.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353884 s, 11.6 MB/s 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:40.261 14:06:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95446 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95446 ']' 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95446 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95446 00:37:40.829 killing process with pid 95446 00:37:40.829 Received shutdown signal, test time was about 60.000000 seconds 00:37:40.829 00:37:40.829 Latency(us) 00:37:40.829 [2024-10-09T14:06:47.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:40.829 [2024-10-09T14:06:47.380Z] =================================================================================================================== 00:37:40.829 [2024-10-09T14:06:47.380Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95446' 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95446 00:37:40.829 [2024-10-09 14:06:47.349588] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:40.829 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 95446 00:37:41.088 [2024-10-09 14:06:47.400428] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:41.088 14:06:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:37:41.088 00:37:41.088 real 0m18.706s 00:37:41.088 user 0m22.670s 00:37:41.088 sys 0m2.587s 00:37:41.088 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:41.088 ************************************ 00:37:41.088 END TEST raid5f_rebuild_test 00:37:41.088 14:06:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:37:41.088 ************************************ 00:37:41.347 14:06:47 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:37:41.347 14:06:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:37:41.347 14:06:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.347 14:06:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:41.347 ************************************ 00:37:41.347 START TEST raid5f_rebuild_test_sb 00:37:41.347 ************************************ 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:41.347 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95951 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95951 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95951 ']' 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:41.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:41.348 14:06:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:41.348 [2024-10-09 14:06:47.822638] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:37:41.348 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:41.348 Zero copy mechanism will not be used. 
00:37:41.348 [2024-10-09 14:06:47.822817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95951 ] 00:37:41.607 [2024-10-09 14:06:48.001569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.607 [2024-10-09 14:06:48.047851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.607 [2024-10-09 14:06:48.091048] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:41.607 [2024-10-09 14:06:48.091084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.544 BaseBdev1_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.544 [2024-10-09 14:06:48.763064] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:42.544 [2024-10-09 14:06:48.763128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.544 [2024-10-09 14:06:48.763163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:42.544 [2024-10-09 14:06:48.763183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.544 [2024-10-09 14:06:48.765665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.544 [2024-10-09 14:06:48.765702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:42.544 BaseBdev1 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.544 BaseBdev2_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.544 [2024-10-09 14:06:48.803016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:42.544 [2024-10-09 14:06:48.803070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:37:42.544 [2024-10-09 14:06:48.803094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:42.544 [2024-10-09 14:06:48.803105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.544 [2024-10-09 14:06:48.805489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.544 [2024-10-09 14:06:48.805527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:42.544 BaseBdev2 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.544 BaseBdev3_malloc 00:37:42.544 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 [2024-10-09 14:06:48.832060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:37:42.545 [2024-10-09 14:06:48.832112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.545 [2024-10-09 14:06:48.832140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:42.545 [2024-10-09 
14:06:48.832151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.545 [2024-10-09 14:06:48.834547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.545 [2024-10-09 14:06:48.834595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:37:42.545 BaseBdev3 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 BaseBdev4_malloc 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 [2024-10-09 14:06:48.861024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:37:42.545 [2024-10-09 14:06:48.861082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.545 [2024-10-09 14:06:48.861110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:42.545 [2024-10-09 14:06:48.861121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.545 [2024-10-09 14:06:48.863517] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:37:42.545 [2024-10-09 14:06:48.863572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:37:42.545 BaseBdev4 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 spare_malloc 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 spare_delay 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 [2024-10-09 14:06:48.902005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:42.545 [2024-10-09 14:06:48.902060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:42.545 [2024-10-09 14:06:48.902084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:37:42.545 [2024-10-09 14:06:48.902096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:42.545 [2024-10-09 14:06:48.904488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:42.545 [2024-10-09 14:06:48.904526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:42.545 spare 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 [2024-10-09 14:06:48.910109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:42.545 [2024-10-09 14:06:48.912271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:42.545 [2024-10-09 14:06:48.912355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:42.545 [2024-10-09 14:06:48.912396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:42.545 [2024-10-09 14:06:48.912572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:37:42.545 [2024-10-09 14:06:48.912584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:42.545 [2024-10-09 14:06:48.912843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:42.545 [2024-10-09 14:06:48.913283] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:37:42.545 [2024-10-09 14:06:48.913298] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000006280 00:37:42.545 [2024-10-09 14:06:48.913430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:42.545 14:06:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:42.545 "name": "raid_bdev1", 00:37:42.545 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:42.545 "strip_size_kb": 64, 00:37:42.545 "state": "online", 00:37:42.545 "raid_level": "raid5f", 00:37:42.545 "superblock": true, 00:37:42.545 "num_base_bdevs": 4, 00:37:42.545 "num_base_bdevs_discovered": 4, 00:37:42.545 "num_base_bdevs_operational": 4, 00:37:42.545 "base_bdevs_list": [ 00:37:42.545 { 00:37:42.545 "name": "BaseBdev1", 00:37:42.545 "uuid": "5a29601a-2f13-5f2a-82b6-dfeca06a123f", 00:37:42.545 "is_configured": true, 00:37:42.545 "data_offset": 2048, 00:37:42.545 "data_size": 63488 00:37:42.545 }, 00:37:42.545 { 00:37:42.545 "name": "BaseBdev2", 00:37:42.545 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:42.545 "is_configured": true, 00:37:42.545 "data_offset": 2048, 00:37:42.545 "data_size": 63488 00:37:42.545 }, 00:37:42.545 { 00:37:42.545 "name": "BaseBdev3", 00:37:42.545 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:42.545 "is_configured": true, 00:37:42.545 "data_offset": 2048, 00:37:42.545 "data_size": 63488 00:37:42.545 }, 00:37:42.545 { 00:37:42.545 "name": "BaseBdev4", 00:37:42.545 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:42.545 "is_configured": true, 00:37:42.545 "data_offset": 2048, 00:37:42.545 "data_size": 63488 00:37:42.545 } 00:37:42.545 ] 00:37:42.545 }' 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:42.545 14:06:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.113 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:43.113 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.114 14:06:49 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.114 [2024-10-09 14:06:49.363650] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:37:43.114 14:06:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:43.114 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:43.373 [2024-10-09 14:06:49.699593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:43.373 /dev/nbd0 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:43.373 1+0 records in 00:37:43.373 
1+0 records out 00:37:43.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290107 s, 14.1 MB/s 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:37:43.373 14:06:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:37:43.940 496+0 records in 00:37:43.940 496+0 records out 00:37:43.940 97517568 bytes (98 MB, 93 MiB) copied, 0.474679 s, 205 MB/s 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:43.940 14:06:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:43.940 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:44.199 [2024-10-09 14:06:50.509047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.199 [2024-10-09 14:06:50.521123] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:44.199 14:06:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.199 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:44.199 "name": "raid_bdev1", 00:37:44.199 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:44.199 "strip_size_kb": 64, 00:37:44.199 "state": "online", 00:37:44.199 "raid_level": "raid5f", 00:37:44.199 "superblock": true, 00:37:44.199 "num_base_bdevs": 4, 00:37:44.199 "num_base_bdevs_discovered": 3, 00:37:44.199 "num_base_bdevs_operational": 3, 00:37:44.199 
"base_bdevs_list": [ 00:37:44.199 { 00:37:44.199 "name": null, 00:37:44.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.200 "is_configured": false, 00:37:44.200 "data_offset": 0, 00:37:44.200 "data_size": 63488 00:37:44.200 }, 00:37:44.200 { 00:37:44.200 "name": "BaseBdev2", 00:37:44.200 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:44.200 "is_configured": true, 00:37:44.200 "data_offset": 2048, 00:37:44.200 "data_size": 63488 00:37:44.200 }, 00:37:44.200 { 00:37:44.200 "name": "BaseBdev3", 00:37:44.200 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:44.200 "is_configured": true, 00:37:44.200 "data_offset": 2048, 00:37:44.200 "data_size": 63488 00:37:44.200 }, 00:37:44.200 { 00:37:44.200 "name": "BaseBdev4", 00:37:44.200 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:44.200 "is_configured": true, 00:37:44.200 "data_offset": 2048, 00:37:44.200 "data_size": 63488 00:37:44.200 } 00:37:44.200 ] 00:37:44.200 }' 00:37:44.200 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:44.200 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.459 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:44.459 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:44.459 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:44.459 [2024-10-09 14:06:50.897249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:44.459 [2024-10-09 14:06:50.900974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:37:44.459 [2024-10-09 14:06:50.903853] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:44.459 14:06:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:44.459 
14:06:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.394 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.653 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:45.653 "name": "raid_bdev1", 00:37:45.653 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:45.653 "strip_size_kb": 64, 00:37:45.653 "state": "online", 00:37:45.653 "raid_level": "raid5f", 00:37:45.653 "superblock": true, 00:37:45.653 "num_base_bdevs": 4, 00:37:45.653 "num_base_bdevs_discovered": 4, 00:37:45.653 "num_base_bdevs_operational": 4, 00:37:45.653 "process": { 00:37:45.653 "type": "rebuild", 00:37:45.653 "target": "spare", 00:37:45.653 "progress": { 00:37:45.653 "blocks": 19200, 00:37:45.653 "percent": 10 00:37:45.653 } 00:37:45.653 }, 00:37:45.653 "base_bdevs_list": [ 00:37:45.653 { 00:37:45.653 "name": "spare", 00:37:45.653 "uuid": 
"137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev2", 00:37:45.653 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev3", 00:37:45.653 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev4", 00:37:45.653 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 } 00:37:45.653 ] 00:37:45.653 }' 00:37:45.653 14:06:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:45.653 [2024-10-09 14:06:52.052724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:45.653 [2024-10-09 14:06:52.112357] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:45.653 [2024-10-09 14:06:52.112446] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:45.653 [2024-10-09 14:06:52.112469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:45.653 [2024-10-09 14:06:52.112482] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:45.653 "name": "raid_bdev1", 00:37:45.653 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:45.653 "strip_size_kb": 64, 00:37:45.653 "state": "online", 00:37:45.653 "raid_level": "raid5f", 00:37:45.653 "superblock": true, 00:37:45.653 "num_base_bdevs": 4, 00:37:45.653 "num_base_bdevs_discovered": 3, 00:37:45.653 "num_base_bdevs_operational": 3, 00:37:45.653 "base_bdevs_list": [ 00:37:45.653 { 00:37:45.653 "name": null, 00:37:45.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:45.653 "is_configured": false, 00:37:45.653 "data_offset": 0, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev2", 00:37:45.653 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev3", 00:37:45.653 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 }, 00:37:45.653 { 00:37:45.653 "name": "BaseBdev4", 00:37:45.653 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:45.653 "is_configured": true, 00:37:45.653 "data_offset": 2048, 00:37:45.653 "data_size": 63488 00:37:45.653 } 00:37:45.653 ] 00:37:45.653 }' 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:45.653 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:46.220 
14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:46.220 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:46.221 "name": "raid_bdev1", 00:37:46.221 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:46.221 "strip_size_kb": 64, 00:37:46.221 "state": "online", 00:37:46.221 "raid_level": "raid5f", 00:37:46.221 "superblock": true, 00:37:46.221 "num_base_bdevs": 4, 00:37:46.221 "num_base_bdevs_discovered": 3, 00:37:46.221 "num_base_bdevs_operational": 3, 00:37:46.221 "base_bdevs_list": [ 00:37:46.221 { 00:37:46.221 "name": null, 00:37:46.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.221 "is_configured": false, 00:37:46.221 "data_offset": 0, 00:37:46.221 "data_size": 63488 00:37:46.221 }, 00:37:46.221 { 00:37:46.221 "name": "BaseBdev2", 00:37:46.221 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:46.221 "is_configured": true, 00:37:46.221 "data_offset": 2048, 00:37:46.221 "data_size": 63488 00:37:46.221 }, 00:37:46.221 { 00:37:46.221 "name": "BaseBdev3", 00:37:46.221 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:46.221 "is_configured": true, 00:37:46.221 "data_offset": 2048, 00:37:46.221 
"data_size": 63488 00:37:46.221 }, 00:37:46.221 { 00:37:46.221 "name": "BaseBdev4", 00:37:46.221 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:46.221 "is_configured": true, 00:37:46.221 "data_offset": 2048, 00:37:46.221 "data_size": 63488 00:37:46.221 } 00:37:46.221 ] 00:37:46.221 }' 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:46.221 [2024-10-09 14:06:52.694480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:46.221 [2024-10-09 14:06:52.697936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:37:46.221 [2024-10-09 14:06:52.700480] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.221 14:06:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.600 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:47.601 "name": "raid_bdev1", 00:37:47.601 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:47.601 "strip_size_kb": 64, 00:37:47.601 "state": "online", 00:37:47.601 "raid_level": "raid5f", 00:37:47.601 "superblock": true, 00:37:47.601 "num_base_bdevs": 4, 00:37:47.601 "num_base_bdevs_discovered": 4, 00:37:47.601 "num_base_bdevs_operational": 4, 00:37:47.601 "process": { 00:37:47.601 "type": "rebuild", 00:37:47.601 "target": "spare", 00:37:47.601 "progress": { 00:37:47.601 "blocks": 19200, 00:37:47.601 "percent": 10 00:37:47.601 } 00:37:47.601 }, 00:37:47.601 "base_bdevs_list": [ 00:37:47.601 { 00:37:47.601 "name": "spare", 00:37:47.601 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 00:37:47.601 "name": "BaseBdev2", 00:37:47.601 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 
00:37:47.601 "name": "BaseBdev3", 00:37:47.601 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 00:37:47.601 "name": "BaseBdev4", 00:37:47.601 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 } 00:37:47.601 ] 00:37:47.601 }' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:47.601 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=547 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:47.601 "name": "raid_bdev1", 00:37:47.601 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:47.601 "strip_size_kb": 64, 00:37:47.601 "state": "online", 00:37:47.601 "raid_level": "raid5f", 00:37:47.601 "superblock": true, 00:37:47.601 "num_base_bdevs": 4, 00:37:47.601 "num_base_bdevs_discovered": 4, 00:37:47.601 "num_base_bdevs_operational": 4, 00:37:47.601 "process": { 00:37:47.601 "type": "rebuild", 00:37:47.601 "target": "spare", 00:37:47.601 "progress": { 00:37:47.601 "blocks": 21120, 00:37:47.601 "percent": 11 00:37:47.601 } 00:37:47.601 }, 00:37:47.601 "base_bdevs_list": [ 00:37:47.601 { 00:37:47.601 "name": "spare", 00:37:47.601 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 00:37:47.601 "name": "BaseBdev2", 00:37:47.601 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 
00:37:47.601 "name": "BaseBdev3", 00:37:47.601 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 }, 00:37:47.601 { 00:37:47.601 "name": "BaseBdev4", 00:37:47.601 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:47.601 "is_configured": true, 00:37:47.601 "data_offset": 2048, 00:37:47.601 "data_size": 63488 00:37:47.601 } 00:37:47.601 ] 00:37:47.601 }' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:47.601 14:06:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:48.561 14:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:48.561 14:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:48.561 14:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:48.561 14:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:48.561 14:06:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:48.561 "name": "raid_bdev1", 00:37:48.561 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:48.561 "strip_size_kb": 64, 00:37:48.561 "state": "online", 00:37:48.561 "raid_level": "raid5f", 00:37:48.561 "superblock": true, 00:37:48.561 "num_base_bdevs": 4, 00:37:48.561 "num_base_bdevs_discovered": 4, 00:37:48.561 "num_base_bdevs_operational": 4, 00:37:48.561 "process": { 00:37:48.561 "type": "rebuild", 00:37:48.561 "target": "spare", 00:37:48.561 "progress": { 00:37:48.561 "blocks": 42240, 00:37:48.561 "percent": 22 00:37:48.561 } 00:37:48.561 }, 00:37:48.561 "base_bdevs_list": [ 00:37:48.561 { 00:37:48.561 "name": "spare", 00:37:48.561 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:48.561 "is_configured": true, 00:37:48.561 "data_offset": 2048, 00:37:48.561 "data_size": 63488 00:37:48.561 }, 00:37:48.561 { 00:37:48.561 "name": "BaseBdev2", 00:37:48.561 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:48.561 "is_configured": true, 00:37:48.561 "data_offset": 2048, 00:37:48.561 "data_size": 63488 00:37:48.561 }, 00:37:48.561 { 00:37:48.561 "name": "BaseBdev3", 00:37:48.561 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:48.561 "is_configured": true, 00:37:48.561 "data_offset": 2048, 00:37:48.561 "data_size": 63488 00:37:48.561 }, 00:37:48.561 { 00:37:48.561 "name": "BaseBdev4", 00:37:48.561 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:48.561 "is_configured": true, 00:37:48.561 "data_offset": 2048, 00:37:48.561 "data_size": 63488 00:37:48.561 } 00:37:48.561 ] 00:37:48.561 }' 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:48.561 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:48.820 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:48.820 14:06:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:49.756 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:49.756 "name": "raid_bdev1", 00:37:49.756 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:49.756 "strip_size_kb": 64, 00:37:49.756 "state": "online", 00:37:49.756 
"raid_level": "raid5f", 00:37:49.756 "superblock": true, 00:37:49.756 "num_base_bdevs": 4, 00:37:49.756 "num_base_bdevs_discovered": 4, 00:37:49.756 "num_base_bdevs_operational": 4, 00:37:49.756 "process": { 00:37:49.756 "type": "rebuild", 00:37:49.756 "target": "spare", 00:37:49.756 "progress": { 00:37:49.756 "blocks": 65280, 00:37:49.756 "percent": 34 00:37:49.756 } 00:37:49.756 }, 00:37:49.756 "base_bdevs_list": [ 00:37:49.757 { 00:37:49.757 "name": "spare", 00:37:49.757 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:49.757 "is_configured": true, 00:37:49.757 "data_offset": 2048, 00:37:49.757 "data_size": 63488 00:37:49.757 }, 00:37:49.757 { 00:37:49.757 "name": "BaseBdev2", 00:37:49.757 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:49.757 "is_configured": true, 00:37:49.757 "data_offset": 2048, 00:37:49.757 "data_size": 63488 00:37:49.757 }, 00:37:49.757 { 00:37:49.757 "name": "BaseBdev3", 00:37:49.757 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:49.757 "is_configured": true, 00:37:49.757 "data_offset": 2048, 00:37:49.757 "data_size": 63488 00:37:49.757 }, 00:37:49.757 { 00:37:49.757 "name": "BaseBdev4", 00:37:49.757 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:49.757 "is_configured": true, 00:37:49.757 "data_offset": 2048, 00:37:49.757 "data_size": 63488 00:37:49.757 } 00:37:49.757 ] 00:37:49.757 }' 00:37:49.757 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:49.757 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:49.757 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:49.757 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:49.757 14:06:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:51.135 "name": "raid_bdev1", 00:37:51.135 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:51.135 "strip_size_kb": 64, 00:37:51.135 "state": "online", 00:37:51.135 "raid_level": "raid5f", 00:37:51.135 "superblock": true, 00:37:51.135 "num_base_bdevs": 4, 00:37:51.135 "num_base_bdevs_discovered": 4, 00:37:51.135 "num_base_bdevs_operational": 4, 00:37:51.135 "process": { 00:37:51.135 "type": "rebuild", 00:37:51.135 "target": "spare", 00:37:51.135 "progress": { 00:37:51.135 "blocks": 86400, 00:37:51.135 "percent": 45 00:37:51.135 } 00:37:51.135 }, 00:37:51.135 "base_bdevs_list": [ 00:37:51.135 { 00:37:51.135 "name": "spare", 00:37:51.135 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:51.135 "is_configured": true, 
00:37:51.135 "data_offset": 2048, 00:37:51.135 "data_size": 63488 00:37:51.135 }, 00:37:51.135 { 00:37:51.135 "name": "BaseBdev2", 00:37:51.135 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:51.135 "is_configured": true, 00:37:51.135 "data_offset": 2048, 00:37:51.135 "data_size": 63488 00:37:51.135 }, 00:37:51.135 { 00:37:51.135 "name": "BaseBdev3", 00:37:51.135 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:51.135 "is_configured": true, 00:37:51.135 "data_offset": 2048, 00:37:51.135 "data_size": 63488 00:37:51.135 }, 00:37:51.135 { 00:37:51.135 "name": "BaseBdev4", 00:37:51.135 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:51.135 "is_configured": true, 00:37:51.135 "data_offset": 2048, 00:37:51.135 "data_size": 63488 00:37:51.135 } 00:37:51.135 ] 00:37:51.135 }' 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:51.135 14:06:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:52.071 "name": "raid_bdev1", 00:37:52.071 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:52.071 "strip_size_kb": 64, 00:37:52.071 "state": "online", 00:37:52.071 "raid_level": "raid5f", 00:37:52.071 "superblock": true, 00:37:52.071 "num_base_bdevs": 4, 00:37:52.071 "num_base_bdevs_discovered": 4, 00:37:52.071 "num_base_bdevs_operational": 4, 00:37:52.071 "process": { 00:37:52.071 "type": "rebuild", 00:37:52.071 "target": "spare", 00:37:52.071 "progress": { 00:37:52.071 "blocks": 109440, 00:37:52.071 "percent": 57 00:37:52.071 } 00:37:52.071 }, 00:37:52.071 "base_bdevs_list": [ 00:37:52.071 { 00:37:52.071 "name": "spare", 00:37:52.071 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:52.071 "is_configured": true, 00:37:52.071 "data_offset": 2048, 00:37:52.071 "data_size": 63488 00:37:52.071 }, 00:37:52.071 { 00:37:52.071 "name": "BaseBdev2", 00:37:52.071 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:52.071 "is_configured": true, 00:37:52.071 "data_offset": 2048, 00:37:52.071 "data_size": 63488 00:37:52.071 }, 00:37:52.071 { 00:37:52.071 "name": "BaseBdev3", 00:37:52.071 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:52.071 "is_configured": true, 00:37:52.071 "data_offset": 2048, 00:37:52.071 "data_size": 63488 00:37:52.071 }, 00:37:52.071 
{ 00:37:52.071 "name": "BaseBdev4", 00:37:52.071 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:52.071 "is_configured": true, 00:37:52.071 "data_offset": 2048, 00:37:52.071 "data_size": 63488 00:37:52.071 } 00:37:52.071 ] 00:37:52.071 }' 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:52.071 14:06:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:53.448 "name": "raid_bdev1", 00:37:53.448 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:53.448 "strip_size_kb": 64, 00:37:53.448 "state": "online", 00:37:53.448 "raid_level": "raid5f", 00:37:53.448 "superblock": true, 00:37:53.448 "num_base_bdevs": 4, 00:37:53.448 "num_base_bdevs_discovered": 4, 00:37:53.448 "num_base_bdevs_operational": 4, 00:37:53.448 "process": { 00:37:53.448 "type": "rebuild", 00:37:53.448 "target": "spare", 00:37:53.448 "progress": { 00:37:53.448 "blocks": 130560, 00:37:53.448 "percent": 68 00:37:53.448 } 00:37:53.448 }, 00:37:53.448 "base_bdevs_list": [ 00:37:53.448 { 00:37:53.448 "name": "spare", 00:37:53.448 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:53.448 "is_configured": true, 00:37:53.448 "data_offset": 2048, 00:37:53.448 "data_size": 63488 00:37:53.448 }, 00:37:53.448 { 00:37:53.448 "name": "BaseBdev2", 00:37:53.448 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:53.448 "is_configured": true, 00:37:53.448 "data_offset": 2048, 00:37:53.448 "data_size": 63488 00:37:53.448 }, 00:37:53.448 { 00:37:53.448 "name": "BaseBdev3", 00:37:53.448 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:53.448 "is_configured": true, 00:37:53.448 "data_offset": 2048, 00:37:53.448 "data_size": 63488 00:37:53.448 }, 00:37:53.448 { 00:37:53.448 "name": "BaseBdev4", 00:37:53.448 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:53.448 "is_configured": true, 00:37:53.448 "data_offset": 2048, 00:37:53.448 "data_size": 63488 00:37:53.448 } 00:37:53.448 ] 00:37:53.448 }' 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:53.448 14:06:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:54.380 "name": "raid_bdev1", 00:37:54.380 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:54.380 "strip_size_kb": 64, 00:37:54.380 "state": "online", 00:37:54.380 "raid_level": "raid5f", 00:37:54.380 "superblock": true, 00:37:54.380 "num_base_bdevs": 4, 00:37:54.380 "num_base_bdevs_discovered": 4, 00:37:54.380 "num_base_bdevs_operational": 4, 00:37:54.380 "process": { 00:37:54.380 "type": 
"rebuild", 00:37:54.380 "target": "spare", 00:37:54.380 "progress": { 00:37:54.380 "blocks": 151680, 00:37:54.380 "percent": 79 00:37:54.380 } 00:37:54.380 }, 00:37:54.380 "base_bdevs_list": [ 00:37:54.380 { 00:37:54.380 "name": "spare", 00:37:54.380 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:54.380 "is_configured": true, 00:37:54.380 "data_offset": 2048, 00:37:54.380 "data_size": 63488 00:37:54.380 }, 00:37:54.380 { 00:37:54.380 "name": "BaseBdev2", 00:37:54.380 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:54.380 "is_configured": true, 00:37:54.380 "data_offset": 2048, 00:37:54.380 "data_size": 63488 00:37:54.380 }, 00:37:54.380 { 00:37:54.380 "name": "BaseBdev3", 00:37:54.380 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:54.380 "is_configured": true, 00:37:54.380 "data_offset": 2048, 00:37:54.380 "data_size": 63488 00:37:54.380 }, 00:37:54.380 { 00:37:54.380 "name": "BaseBdev4", 00:37:54.380 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:54.380 "is_configured": true, 00:37:54.380 "data_offset": 2048, 00:37:54.380 "data_size": 63488 00:37:54.380 } 00:37:54.380 ] 00:37:54.380 }' 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:54.380 14:07:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:55.780 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:55.780 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:55.780 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:55.780 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:55.781 "name": "raid_bdev1", 00:37:55.781 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:55.781 "strip_size_kb": 64, 00:37:55.781 "state": "online", 00:37:55.781 "raid_level": "raid5f", 00:37:55.781 "superblock": true, 00:37:55.781 "num_base_bdevs": 4, 00:37:55.781 "num_base_bdevs_discovered": 4, 00:37:55.781 "num_base_bdevs_operational": 4, 00:37:55.781 "process": { 00:37:55.781 "type": "rebuild", 00:37:55.781 "target": "spare", 00:37:55.781 "progress": { 00:37:55.781 "blocks": 174720, 00:37:55.781 "percent": 91 00:37:55.781 } 00:37:55.781 }, 00:37:55.781 "base_bdevs_list": [ 00:37:55.781 { 00:37:55.781 "name": "spare", 00:37:55.781 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:55.781 "is_configured": true, 00:37:55.781 "data_offset": 2048, 00:37:55.781 "data_size": 63488 00:37:55.781 }, 00:37:55.781 { 00:37:55.781 "name": "BaseBdev2", 00:37:55.781 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:55.781 
"is_configured": true, 00:37:55.781 "data_offset": 2048, 00:37:55.781 "data_size": 63488 00:37:55.781 }, 00:37:55.781 { 00:37:55.781 "name": "BaseBdev3", 00:37:55.781 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:55.781 "is_configured": true, 00:37:55.781 "data_offset": 2048, 00:37:55.781 "data_size": 63488 00:37:55.781 }, 00:37:55.781 { 00:37:55.781 "name": "BaseBdev4", 00:37:55.781 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:55.781 "is_configured": true, 00:37:55.781 "data_offset": 2048, 00:37:55.781 "data_size": 63488 00:37:55.781 } 00:37:55.781 ] 00:37:55.781 }' 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:55.781 14:07:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:55.781 14:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:55.781 14:07:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:56.351 [2024-10-09 14:07:02.768916] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:56.351 [2024-10-09 14:07:02.768991] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:56.351 [2024-10-09 14:07:02.769102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:56.610 "name": "raid_bdev1", 00:37:56.610 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:56.610 "strip_size_kb": 64, 00:37:56.610 "state": "online", 00:37:56.610 "raid_level": "raid5f", 00:37:56.610 "superblock": true, 00:37:56.610 "num_base_bdevs": 4, 00:37:56.610 "num_base_bdevs_discovered": 4, 00:37:56.610 "num_base_bdevs_operational": 4, 00:37:56.610 "base_bdevs_list": [ 00:37:56.610 { 00:37:56.610 "name": "spare", 00:37:56.610 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:56.610 "is_configured": true, 00:37:56.610 "data_offset": 2048, 00:37:56.610 "data_size": 63488 00:37:56.610 }, 00:37:56.610 { 00:37:56.610 "name": "BaseBdev2", 00:37:56.610 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:56.610 "is_configured": true, 00:37:56.610 "data_offset": 2048, 00:37:56.610 "data_size": 63488 00:37:56.610 }, 00:37:56.610 { 00:37:56.610 "name": "BaseBdev3", 00:37:56.610 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:56.610 "is_configured": true, 00:37:56.610 "data_offset": 2048, 00:37:56.610 "data_size": 63488 00:37:56.610 }, 00:37:56.610 { 00:37:56.610 "name": 
"BaseBdev4", 00:37:56.610 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:56.610 "is_configured": true, 00:37:56.610 "data_offset": 2048, 00:37:56.610 "data_size": 63488 00:37:56.610 } 00:37:56.610 ] 00:37:56.610 }' 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:56.610 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:37:56.870 "name": "raid_bdev1", 00:37:56.870 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:56.870 "strip_size_kb": 64, 00:37:56.870 "state": "online", 00:37:56.870 "raid_level": "raid5f", 00:37:56.870 "superblock": true, 00:37:56.870 "num_base_bdevs": 4, 00:37:56.870 "num_base_bdevs_discovered": 4, 00:37:56.870 "num_base_bdevs_operational": 4, 00:37:56.870 "base_bdevs_list": [ 00:37:56.870 { 00:37:56.870 "name": "spare", 00:37:56.870 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:56.870 "is_configured": true, 00:37:56.870 "data_offset": 2048, 00:37:56.870 "data_size": 63488 00:37:56.870 }, 00:37:56.870 { 00:37:56.870 "name": "BaseBdev2", 00:37:56.870 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:56.870 "is_configured": true, 00:37:56.870 "data_offset": 2048, 00:37:56.870 "data_size": 63488 00:37:56.870 }, 00:37:56.870 { 00:37:56.870 "name": "BaseBdev3", 00:37:56.870 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:56.870 "is_configured": true, 00:37:56.870 "data_offset": 2048, 00:37:56.870 "data_size": 63488 00:37:56.870 }, 00:37:56.870 { 00:37:56.870 "name": "BaseBdev4", 00:37:56.870 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:56.870 "is_configured": true, 00:37:56.870 "data_offset": 2048, 00:37:56.870 "data_size": 63488 00:37:56.870 } 00:37:56.870 ] 00:37:56.870 }' 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:56.870 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:56.871 "name": "raid_bdev1", 00:37:56.871 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:56.871 "strip_size_kb": 64, 00:37:56.871 "state": "online", 00:37:56.871 "raid_level": "raid5f", 00:37:56.871 "superblock": true, 00:37:56.871 "num_base_bdevs": 4, 00:37:56.871 "num_base_bdevs_discovered": 4, 00:37:56.871 "num_base_bdevs_operational": 4, 00:37:56.871 "base_bdevs_list": [ 00:37:56.871 { 
00:37:56.871 "name": "spare", 00:37:56.871 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:56.871 "is_configured": true, 00:37:56.871 "data_offset": 2048, 00:37:56.871 "data_size": 63488 00:37:56.871 }, 00:37:56.871 { 00:37:56.871 "name": "BaseBdev2", 00:37:56.871 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:56.871 "is_configured": true, 00:37:56.871 "data_offset": 2048, 00:37:56.871 "data_size": 63488 00:37:56.871 }, 00:37:56.871 { 00:37:56.871 "name": "BaseBdev3", 00:37:56.871 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:56.871 "is_configured": true, 00:37:56.871 "data_offset": 2048, 00:37:56.871 "data_size": 63488 00:37:56.871 }, 00:37:56.871 { 00:37:56.871 "name": "BaseBdev4", 00:37:56.871 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:56.871 "is_configured": true, 00:37:56.871 "data_offset": 2048, 00:37:56.871 "data_size": 63488 00:37:56.871 } 00:37:56.871 ] 00:37:56.871 }' 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:56.871 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:57.439 [2024-10-09 14:07:03.766357] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:57.439 [2024-10-09 14:07:03.766496] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:57.439 [2024-10-09 14:07:03.766608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:57.439 [2024-10-09 14:07:03.766703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:57.439 [2024-10-09 
14:07:03.766724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:57.439 14:07:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.439 14:07:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:57.698 /dev/nbd0 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:37:57.698 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.699 1+0 records in 00:37:57.699 1+0 records out 00:37:57.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264006 s, 15.5 MB/s 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.699 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:57.958 /dev/nbd1 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:57.958 1+0 records in 00:37:57.958 
1+0 records out 00:37:57.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298762 s, 13.7 MB/s 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:57.958 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:58.218 
14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:58.218 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:58.478 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.479 [2024-10-09 14:07:04.900215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:58.479 [2024-10-09 14:07:04.900273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:58.479 [2024-10-09 14:07:04.900296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:37:58.479 [2024-10-09 14:07:04.900311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:58.479 [2024-10-09 14:07:04.902867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:58.479 [2024-10-09 14:07:04.903014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:58.479 [2024-10-09 14:07:04.903118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:58.479 [2024-10-09 14:07:04.903163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:58.479 [2024-10-09 14:07:04.903276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:58.479 [2024-10-09 14:07:04.903371] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:37:58.479 [2024-10-09 14:07:04.903437] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:37:58.479 spare 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.479 14:07:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.479 [2024-10-09 14:07:05.003526] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:37:58.479 [2024-10-09 14:07:05.003552] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:37:58.479 [2024-10-09 14:07:05.003855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:37:58.479 [2024-10-09 14:07:05.004332] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:37:58.479 [2024-10-09 14:07:05.004353] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:37:58.479 [2024-10-09 14:07:05.004510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.479 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.737 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.737 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:58.737 "name": "raid_bdev1", 00:37:58.737 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:58.737 "strip_size_kb": 64, 00:37:58.737 "state": "online", 00:37:58.737 "raid_level": "raid5f", 00:37:58.737 "superblock": true, 00:37:58.737 "num_base_bdevs": 4, 00:37:58.737 "num_base_bdevs_discovered": 4, 00:37:58.737 "num_base_bdevs_operational": 4, 00:37:58.737 "base_bdevs_list": [ 00:37:58.737 { 00:37:58.737 "name": "spare", 00:37:58.737 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:58.737 "is_configured": true, 00:37:58.737 "data_offset": 2048, 00:37:58.737 "data_size": 63488 00:37:58.737 }, 00:37:58.737 { 00:37:58.737 "name": "BaseBdev2", 00:37:58.737 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:58.737 "is_configured": true, 00:37:58.737 "data_offset": 
2048, 00:37:58.737 "data_size": 63488 00:37:58.737 }, 00:37:58.737 { 00:37:58.737 "name": "BaseBdev3", 00:37:58.737 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:58.737 "is_configured": true, 00:37:58.737 "data_offset": 2048, 00:37:58.737 "data_size": 63488 00:37:58.737 }, 00:37:58.737 { 00:37:58.737 "name": "BaseBdev4", 00:37:58.737 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:58.737 "is_configured": true, 00:37:58.737 "data_offset": 2048, 00:37:58.737 "data_size": 63488 00:37:58.737 } 00:37:58.737 ] 00:37:58.737 }' 00:37:58.737 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:58.737 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:58.996 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:58.997 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:58.997 "name": 
"raid_bdev1", 00:37:58.997 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:58.997 "strip_size_kb": 64, 00:37:58.997 "state": "online", 00:37:58.997 "raid_level": "raid5f", 00:37:58.997 "superblock": true, 00:37:58.997 "num_base_bdevs": 4, 00:37:58.997 "num_base_bdevs_discovered": 4, 00:37:58.997 "num_base_bdevs_operational": 4, 00:37:58.997 "base_bdevs_list": [ 00:37:58.997 { 00:37:58.997 "name": "spare", 00:37:58.997 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:37:58.997 "is_configured": true, 00:37:58.997 "data_offset": 2048, 00:37:58.997 "data_size": 63488 00:37:58.997 }, 00:37:58.997 { 00:37:58.997 "name": "BaseBdev2", 00:37:58.997 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:58.997 "is_configured": true, 00:37:58.997 "data_offset": 2048, 00:37:58.997 "data_size": 63488 00:37:58.997 }, 00:37:58.997 { 00:37:58.997 "name": "BaseBdev3", 00:37:58.997 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:58.997 "is_configured": true, 00:37:58.997 "data_offset": 2048, 00:37:58.997 "data_size": 63488 00:37:58.997 }, 00:37:58.997 { 00:37:58.997 "name": "BaseBdev4", 00:37:58.997 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:58.997 "is_configured": true, 00:37:58.997 "data_offset": 2048, 00:37:58.997 "data_size": 63488 00:37:58.997 } 00:37:58.997 ] 00:37:58.997 }' 00:37:58.997 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:58.997 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:58.997 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.256 [2024-10-09 14:07:05.624658] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:59.256 "name": "raid_bdev1", 00:37:59.256 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:37:59.256 "strip_size_kb": 64, 00:37:59.256 "state": "online", 00:37:59.256 "raid_level": "raid5f", 00:37:59.256 "superblock": true, 00:37:59.256 "num_base_bdevs": 4, 00:37:59.256 "num_base_bdevs_discovered": 3, 00:37:59.256 "num_base_bdevs_operational": 3, 00:37:59.256 "base_bdevs_list": [ 00:37:59.256 { 00:37:59.256 "name": null, 00:37:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:59.256 "is_configured": false, 00:37:59.256 "data_offset": 0, 00:37:59.256 "data_size": 63488 00:37:59.256 }, 00:37:59.256 { 00:37:59.256 "name": "BaseBdev2", 00:37:59.256 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:37:59.256 "is_configured": true, 00:37:59.256 "data_offset": 2048, 00:37:59.256 "data_size": 63488 00:37:59.256 }, 00:37:59.256 { 00:37:59.256 "name": "BaseBdev3", 00:37:59.256 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:37:59.256 "is_configured": true, 00:37:59.256 "data_offset": 2048, 00:37:59.256 "data_size": 63488 00:37:59.256 }, 00:37:59.256 { 00:37:59.256 "name": "BaseBdev4", 00:37:59.256 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:37:59.256 "is_configured": true, 00:37:59.256 "data_offset": 
2048, 00:37:59.256 "data_size": 63488 00:37:59.256 } 00:37:59.256 ] 00:37:59.256 }' 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:59.256 14:07:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.517 14:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:59.517 14:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:59.517 14:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:37:59.517 [2024-10-09 14:07:06.044771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:59.517 [2024-10-09 14:07:06.045106] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:59.517 [2024-10-09 14:07:06.045248] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:59.517 [2024-10-09 14:07:06.045366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:59.517 [2024-10-09 14:07:06.048637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:37:59.517 [2024-10-09 14:07:06.051281] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:59.517 14:07:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:59.517 14:07:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:00.897 "name": "raid_bdev1", 00:38:00.897 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:00.897 "strip_size_kb": 64, 00:38:00.897 "state": "online", 00:38:00.897 
"raid_level": "raid5f", 00:38:00.897 "superblock": true, 00:38:00.897 "num_base_bdevs": 4, 00:38:00.897 "num_base_bdevs_discovered": 4, 00:38:00.897 "num_base_bdevs_operational": 4, 00:38:00.897 "process": { 00:38:00.897 "type": "rebuild", 00:38:00.897 "target": "spare", 00:38:00.897 "progress": { 00:38:00.897 "blocks": 19200, 00:38:00.897 "percent": 10 00:38:00.897 } 00:38:00.897 }, 00:38:00.897 "base_bdevs_list": [ 00:38:00.897 { 00:38:00.897 "name": "spare", 00:38:00.897 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev2", 00:38:00.897 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev3", 00:38:00.897 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev4", 00:38:00.897 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 } 00:38:00.897 ] 00:38:00.897 }' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.897 [2024-10-09 14:07:07.204295] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.897 [2024-10-09 14:07:07.259214] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:00.897 [2024-10-09 14:07:07.259273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:00.897 [2024-10-09 14:07:07.259293] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.897 [2024-10-09 14:07:07.259302] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:00.897 "name": "raid_bdev1", 00:38:00.897 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:00.897 "strip_size_kb": 64, 00:38:00.897 "state": "online", 00:38:00.897 "raid_level": "raid5f", 00:38:00.897 "superblock": true, 00:38:00.897 "num_base_bdevs": 4, 00:38:00.897 "num_base_bdevs_discovered": 3, 00:38:00.897 "num_base_bdevs_operational": 3, 00:38:00.897 "base_bdevs_list": [ 00:38:00.897 { 00:38:00.897 "name": null, 00:38:00.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:00.897 "is_configured": false, 00:38:00.897 "data_offset": 0, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev2", 00:38:00.897 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev3", 00:38:00.897 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 }, 00:38:00.897 { 00:38:00.897 "name": "BaseBdev4", 00:38:00.897 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:00.897 "is_configured": true, 00:38:00.897 "data_offset": 2048, 00:38:00.897 "data_size": 63488 00:38:00.897 } 00:38:00.897 ] 00:38:00.897 
}' 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:00.897 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:01.466 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:01.466 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.466 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:01.466 [2024-10-09 14:07:07.720258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:01.466 [2024-10-09 14:07:07.720428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:01.466 [2024-10-09 14:07:07.720470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:38:01.466 [2024-10-09 14:07:07.720483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:01.466 [2024-10-09 14:07:07.720950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:01.466 [2024-10-09 14:07:07.720975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:01.466 [2024-10-09 14:07:07.721067] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:01.466 [2024-10-09 14:07:07.721080] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:01.466 [2024-10-09 14:07:07.721097] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:01.466 [2024-10-09 14:07:07.721123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:01.466 [2024-10-09 14:07:07.724382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:38:01.466 spare 00:38:01.466 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.466 14:07:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:01.466 [2024-10-09 14:07:07.726936] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:02.403 "name": "raid_bdev1", 00:38:02.403 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:02.403 "strip_size_kb": 64, 00:38:02.403 "state": 
"online", 00:38:02.403 "raid_level": "raid5f", 00:38:02.403 "superblock": true, 00:38:02.403 "num_base_bdevs": 4, 00:38:02.403 "num_base_bdevs_discovered": 4, 00:38:02.403 "num_base_bdevs_operational": 4, 00:38:02.403 "process": { 00:38:02.403 "type": "rebuild", 00:38:02.403 "target": "spare", 00:38:02.403 "progress": { 00:38:02.403 "blocks": 19200, 00:38:02.403 "percent": 10 00:38:02.403 } 00:38:02.403 }, 00:38:02.403 "base_bdevs_list": [ 00:38:02.403 { 00:38:02.403 "name": "spare", 00:38:02.403 "uuid": "137c5731-fc5f-5442-9156-a870af51f8fc", 00:38:02.403 "is_configured": true, 00:38:02.403 "data_offset": 2048, 00:38:02.403 "data_size": 63488 00:38:02.403 }, 00:38:02.403 { 00:38:02.403 "name": "BaseBdev2", 00:38:02.403 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:02.403 "is_configured": true, 00:38:02.403 "data_offset": 2048, 00:38:02.403 "data_size": 63488 00:38:02.403 }, 00:38:02.403 { 00:38:02.403 "name": "BaseBdev3", 00:38:02.403 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:02.403 "is_configured": true, 00:38:02.403 "data_offset": 2048, 00:38:02.403 "data_size": 63488 00:38:02.403 }, 00:38:02.403 { 00:38:02.403 "name": "BaseBdev4", 00:38:02.403 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:02.403 "is_configured": true, 00:38:02.403 "data_offset": 2048, 00:38:02.403 "data_size": 63488 00:38:02.403 } 00:38:02.403 ] 00:38:02.403 }' 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:02.403 14:07:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.403 [2024-10-09 14:07:08.881237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:02.403 [2024-10-09 14:07:08.934958] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:02.403 [2024-10-09 14:07:08.935168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:02.403 [2024-10-09 14:07:08.935262] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:02.403 [2024-10-09 14:07:08.935306] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:02.403 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:02.403 14:07:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:02.662 "name": "raid_bdev1", 00:38:02.662 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:02.662 "strip_size_kb": 64, 00:38:02.662 "state": "online", 00:38:02.662 "raid_level": "raid5f", 00:38:02.662 "superblock": true, 00:38:02.662 "num_base_bdevs": 4, 00:38:02.662 "num_base_bdevs_discovered": 3, 00:38:02.662 "num_base_bdevs_operational": 3, 00:38:02.662 "base_bdevs_list": [ 00:38:02.662 { 00:38:02.662 "name": null, 00:38:02.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:02.662 "is_configured": false, 00:38:02.662 "data_offset": 0, 00:38:02.662 "data_size": 63488 00:38:02.662 }, 00:38:02.662 { 00:38:02.662 "name": "BaseBdev2", 00:38:02.662 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:02.662 "is_configured": true, 00:38:02.662 "data_offset": 2048, 00:38:02.662 "data_size": 63488 00:38:02.662 }, 00:38:02.662 { 00:38:02.662 "name": "BaseBdev3", 00:38:02.662 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:02.662 "is_configured": true, 00:38:02.662 "data_offset": 2048, 00:38:02.662 "data_size": 63488 00:38:02.662 }, 00:38:02.662 { 00:38:02.662 "name": "BaseBdev4", 00:38:02.662 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:02.662 "is_configured": true, 00:38:02.662 "data_offset": 2048, 00:38:02.662 
"data_size": 63488 00:38:02.662 } 00:38:02.662 ] 00:38:02.662 }' 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:02.662 14:07:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:02.922 "name": "raid_bdev1", 00:38:02.922 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:02.922 "strip_size_kb": 64, 00:38:02.922 "state": "online", 00:38:02.922 "raid_level": "raid5f", 00:38:02.922 "superblock": true, 00:38:02.922 "num_base_bdevs": 4, 00:38:02.922 "num_base_bdevs_discovered": 3, 00:38:02.922 "num_base_bdevs_operational": 3, 00:38:02.922 "base_bdevs_list": [ 00:38:02.922 { 00:38:02.922 "name": null, 00:38:02.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:02.922 
"is_configured": false, 00:38:02.922 "data_offset": 0, 00:38:02.922 "data_size": 63488 00:38:02.922 }, 00:38:02.922 { 00:38:02.922 "name": "BaseBdev2", 00:38:02.922 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:02.922 "is_configured": true, 00:38:02.922 "data_offset": 2048, 00:38:02.922 "data_size": 63488 00:38:02.922 }, 00:38:02.922 { 00:38:02.922 "name": "BaseBdev3", 00:38:02.922 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:02.922 "is_configured": true, 00:38:02.922 "data_offset": 2048, 00:38:02.922 "data_size": 63488 00:38:02.922 }, 00:38:02.922 { 00:38:02.922 "name": "BaseBdev4", 00:38:02.922 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:02.922 "is_configured": true, 00:38:02.922 "data_offset": 2048, 00:38:02.922 "data_size": 63488 00:38:02.922 } 00:38:02.922 ] 00:38:02.922 }' 00:38:02.922 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:03.181 14:07:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:03.181 [2024-10-09 14:07:09.536349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:03.181 [2024-10-09 14:07:09.536419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:03.181 [2024-10-09 14:07:09.536444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:38:03.181 [2024-10-09 14:07:09.536459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:03.181 [2024-10-09 14:07:09.536929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:03.181 [2024-10-09 14:07:09.536960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:03.181 [2024-10-09 14:07:09.537040] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:03.181 [2024-10-09 14:07:09.537061] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:03.181 [2024-10-09 14:07:09.537071] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:03.181 [2024-10-09 14:07:09.537094] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:03.181 BaseBdev1 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:03.181 14:07:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:04.145 "name": "raid_bdev1", 00:38:04.145 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:04.145 "strip_size_kb": 64, 00:38:04.145 "state": "online", 00:38:04.145 "raid_level": "raid5f", 00:38:04.145 "superblock": true, 00:38:04.145 "num_base_bdevs": 4, 00:38:04.145 "num_base_bdevs_discovered": 3, 00:38:04.145 "num_base_bdevs_operational": 3, 00:38:04.145 "base_bdevs_list": [ 00:38:04.145 { 00:38:04.145 "name": null, 00:38:04.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.145 "is_configured": false, 00:38:04.145 
"data_offset": 0, 00:38:04.145 "data_size": 63488 00:38:04.145 }, 00:38:04.145 { 00:38:04.145 "name": "BaseBdev2", 00:38:04.145 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:04.145 "is_configured": true, 00:38:04.145 "data_offset": 2048, 00:38:04.145 "data_size": 63488 00:38:04.145 }, 00:38:04.145 { 00:38:04.145 "name": "BaseBdev3", 00:38:04.145 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:04.145 "is_configured": true, 00:38:04.145 "data_offset": 2048, 00:38:04.145 "data_size": 63488 00:38:04.145 }, 00:38:04.145 { 00:38:04.145 "name": "BaseBdev4", 00:38:04.145 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:04.145 "is_configured": true, 00:38:04.145 "data_offset": 2048, 00:38:04.145 "data_size": 63488 00:38:04.145 } 00:38:04.145 ] 00:38:04.145 }' 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:04.145 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.713 14:07:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:04.713 "name": "raid_bdev1", 00:38:04.713 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:04.713 "strip_size_kb": 64, 00:38:04.713 "state": "online", 00:38:04.713 "raid_level": "raid5f", 00:38:04.713 "superblock": true, 00:38:04.713 "num_base_bdevs": 4, 00:38:04.713 "num_base_bdevs_discovered": 3, 00:38:04.713 "num_base_bdevs_operational": 3, 00:38:04.713 "base_bdevs_list": [ 00:38:04.713 { 00:38:04.713 "name": null, 00:38:04.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.713 "is_configured": false, 00:38:04.713 "data_offset": 0, 00:38:04.713 "data_size": 63488 00:38:04.713 }, 00:38:04.713 { 00:38:04.713 "name": "BaseBdev2", 00:38:04.713 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:04.713 "is_configured": true, 00:38:04.713 "data_offset": 2048, 00:38:04.713 "data_size": 63488 00:38:04.713 }, 00:38:04.713 { 00:38:04.713 "name": "BaseBdev3", 00:38:04.713 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:04.713 "is_configured": true, 00:38:04.713 "data_offset": 2048, 00:38:04.713 "data_size": 63488 00:38:04.713 }, 00:38:04.713 { 00:38:04.713 "name": "BaseBdev4", 00:38:04.713 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:04.713 "is_configured": true, 00:38:04.713 "data_offset": 2048, 00:38:04.713 "data_size": 63488 00:38:04.713 } 00:38:04.713 ] 00:38:04.713 }' 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:04.713 
14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:04.713 [2024-10-09 14:07:11.140682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:04.713 [2024-10-09 14:07:11.140834] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:04.713 [2024-10-09 14:07:11.140852] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:04.713 request: 00:38:04.713 { 00:38:04.713 "base_bdev": "BaseBdev1", 00:38:04.713 "raid_bdev": "raid_bdev1", 00:38:04.713 "method": "bdev_raid_add_base_bdev", 00:38:04.713 "req_id": 1 00:38:04.713 } 00:38:04.713 Got JSON-RPC error response 00:38:04.713 response: 00:38:04.713 { 00:38:04.713 "code": -22, 00:38:04.713 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:38:04.713 } 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:04.713 14:07:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:05.649 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.908 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:05.908 "name": "raid_bdev1", 00:38:05.908 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:05.908 "strip_size_kb": 64, 00:38:05.908 "state": "online", 00:38:05.908 "raid_level": "raid5f", 00:38:05.908 "superblock": true, 00:38:05.908 "num_base_bdevs": 4, 00:38:05.908 "num_base_bdevs_discovered": 3, 00:38:05.908 "num_base_bdevs_operational": 3, 00:38:05.908 "base_bdevs_list": [ 00:38:05.908 { 00:38:05.908 "name": null, 00:38:05.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:05.908 "is_configured": false, 00:38:05.908 "data_offset": 0, 00:38:05.908 "data_size": 63488 00:38:05.908 }, 00:38:05.908 { 00:38:05.908 "name": "BaseBdev2", 00:38:05.908 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:05.908 "is_configured": true, 00:38:05.908 "data_offset": 2048, 00:38:05.908 "data_size": 63488 00:38:05.908 }, 00:38:05.908 { 00:38:05.908 "name": "BaseBdev3", 00:38:05.908 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:05.908 "is_configured": true, 00:38:05.908 "data_offset": 2048, 00:38:05.908 "data_size": 63488 00:38:05.908 }, 00:38:05.908 { 00:38:05.908 "name": "BaseBdev4", 00:38:05.908 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:05.908 "is_configured": true, 00:38:05.908 "data_offset": 2048, 00:38:05.908 "data_size": 63488 00:38:05.908 } 00:38:05.908 ] 00:38:05.908 }' 00:38:05.908 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:05.908 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:06.167 "name": "raid_bdev1", 00:38:06.167 "uuid": "b0fdd074-cdbc-449c-9b63-1bbebfeee3a4", 00:38:06.167 "strip_size_kb": 64, 00:38:06.167 "state": "online", 00:38:06.167 "raid_level": "raid5f", 00:38:06.167 "superblock": true, 00:38:06.167 "num_base_bdevs": 4, 00:38:06.167 "num_base_bdevs_discovered": 3, 00:38:06.167 "num_base_bdevs_operational": 3, 00:38:06.167 "base_bdevs_list": [ 00:38:06.167 { 00:38:06.167 "name": null, 00:38:06.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.167 "is_configured": false, 00:38:06.167 "data_offset": 0, 00:38:06.167 "data_size": 63488 00:38:06.167 }, 00:38:06.167 { 00:38:06.167 "name": "BaseBdev2", 00:38:06.167 "uuid": "2c2571ff-face-5209-942b-62c7d38cd4d6", 00:38:06.167 "is_configured": true, 
00:38:06.167 "data_offset": 2048, 00:38:06.167 "data_size": 63488 00:38:06.167 }, 00:38:06.167 { 00:38:06.167 "name": "BaseBdev3", 00:38:06.167 "uuid": "9ce332b9-fe79-5692-a462-f4934ea07fb0", 00:38:06.167 "is_configured": true, 00:38:06.167 "data_offset": 2048, 00:38:06.167 "data_size": 63488 00:38:06.167 }, 00:38:06.167 { 00:38:06.167 "name": "BaseBdev4", 00:38:06.167 "uuid": "410eb2f4-197a-5f36-a31c-cf5545404861", 00:38:06.167 "is_configured": true, 00:38:06.167 "data_offset": 2048, 00:38:06.167 "data_size": 63488 00:38:06.167 } 00:38:06.167 ] 00:38:06.167 }' 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:06.167 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95951 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95951 ']' 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95951 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:06.426 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95951 00:38:06.426 killing process with pid 95951 00:38:06.426 Received shutdown signal, test time was about 60.000000 seconds 00:38:06.426 00:38:06.426 Latency(us) 00:38:06.426 [2024-10-09T14:07:12.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:06.426 [2024-10-09T14:07:12.977Z] 
=================================================================================================================== 00:38:06.427 [2024-10-09T14:07:12.978Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:06.427 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:06.427 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:06.427 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95951' 00:38:06.427 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95951 00:38:06.427 [2024-10-09 14:07:12.794488] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:06.427 [2024-10-09 14:07:12.794611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:06.427 14:07:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95951 00:38:06.427 [2024-10-09 14:07:12.794688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:06.427 [2024-10-09 14:07:12.794699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:38:06.427 [2024-10-09 14:07:12.845262] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:06.685 14:07:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:38:06.685 00:38:06.685 real 0m25.378s 00:38:06.685 user 0m32.266s 00:38:06.685 sys 0m3.260s 00:38:06.685 14:07:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:06.685 14:07:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 ************************************ 00:38:06.685 END TEST raid5f_rebuild_test_sb 00:38:06.685 ************************************ 00:38:06.685 14:07:13 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:38:06.685 14:07:13 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:38:06.685 14:07:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:38:06.685 14:07:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:06.685 14:07:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:06.685 ************************************ 00:38:06.685 START TEST raid_state_function_test_sb_4k 00:38:06.685 ************************************ 00:38:06.685 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:38:06.685 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:06.686 14:07:13 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96744 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96744' 00:38:06.686 Process raid pid: 96744 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96744 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96744 ']' 00:38:06.686 14:07:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:06.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.686 14:07:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:06.944 [2024-10-09 14:07:13.266667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:06.945 [2024-10-09 14:07:13.267071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:06.945 [2024-10-09 14:07:13.449959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.203 [2024-10-09 14:07:13.505364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.203 [2024-10-09 14:07:13.556679] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:07.203 [2024-10-09 14:07:13.556739] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:07.772 [2024-10-09 14:07:14.130413] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:07.772 [2024-10-09 14:07:14.130467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:07.772 [2024-10-09 14:07:14.130481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:07.772 [2024-10-09 14:07:14.130494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:07.772 
14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:07.772 "name": "Existed_Raid", 00:38:07.772 "uuid": "14341a88-4eee-4744-89c5-80e487452a1d", 00:38:07.772 "strip_size_kb": 0, 00:38:07.772 "state": "configuring", 00:38:07.772 "raid_level": "raid1", 00:38:07.772 "superblock": true, 00:38:07.772 "num_base_bdevs": 2, 00:38:07.772 "num_base_bdevs_discovered": 0, 00:38:07.772 "num_base_bdevs_operational": 2, 00:38:07.772 "base_bdevs_list": [ 00:38:07.772 { 00:38:07.772 "name": "BaseBdev1", 00:38:07.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.772 "is_configured": false, 00:38:07.772 "data_offset": 0, 00:38:07.772 "data_size": 0 00:38:07.772 }, 00:38:07.772 { 00:38:07.772 "name": "BaseBdev2", 00:38:07.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.772 "is_configured": false, 00:38:07.772 "data_offset": 0, 00:38:07.772 "data_size": 0 00:38:07.772 } 00:38:07.772 ] 00:38:07.772 }' 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:07.772 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.032 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:38:08.032 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.032 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.291 [2024-10-09 14:07:14.582422] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:08.291 [2024-10-09 14:07:14.582619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.291 [2024-10-09 14:07:14.590477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:08.291 [2024-10-09 14:07:14.590653] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:08.291 [2024-10-09 14:07:14.590674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:08.291 [2024-10-09 14:07:14.590700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.291 14:07:14 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.291 [2024-10-09 14:07:14.607840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:08.291 BaseBdev1 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.291 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.292 [ 00:38:08.292 { 00:38:08.292 "name": "BaseBdev1", 00:38:08.292 "aliases": [ 00:38:08.292 
"fe139263-46d5-419a-8422-2a7df877a735" 00:38:08.292 ], 00:38:08.292 "product_name": "Malloc disk", 00:38:08.292 "block_size": 4096, 00:38:08.292 "num_blocks": 8192, 00:38:08.292 "uuid": "fe139263-46d5-419a-8422-2a7df877a735", 00:38:08.292 "assigned_rate_limits": { 00:38:08.292 "rw_ios_per_sec": 0, 00:38:08.292 "rw_mbytes_per_sec": 0, 00:38:08.292 "r_mbytes_per_sec": 0, 00:38:08.292 "w_mbytes_per_sec": 0 00:38:08.292 }, 00:38:08.292 "claimed": true, 00:38:08.292 "claim_type": "exclusive_write", 00:38:08.292 "zoned": false, 00:38:08.292 "supported_io_types": { 00:38:08.292 "read": true, 00:38:08.292 "write": true, 00:38:08.292 "unmap": true, 00:38:08.292 "flush": true, 00:38:08.292 "reset": true, 00:38:08.292 "nvme_admin": false, 00:38:08.292 "nvme_io": false, 00:38:08.292 "nvme_io_md": false, 00:38:08.292 "write_zeroes": true, 00:38:08.292 "zcopy": true, 00:38:08.292 "get_zone_info": false, 00:38:08.292 "zone_management": false, 00:38:08.292 "zone_append": false, 00:38:08.292 "compare": false, 00:38:08.292 "compare_and_write": false, 00:38:08.292 "abort": true, 00:38:08.292 "seek_hole": false, 00:38:08.292 "seek_data": false, 00:38:08.292 "copy": true, 00:38:08.292 "nvme_iov_md": false 00:38:08.292 }, 00:38:08.292 "memory_domains": [ 00:38:08.292 { 00:38:08.292 "dma_device_id": "system", 00:38:08.292 "dma_device_type": 1 00:38:08.292 }, 00:38:08.292 { 00:38:08.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:08.292 "dma_device_type": 2 00:38:08.292 } 00:38:08.292 ], 00:38:08.292 "driver_specific": {} 00:38:08.292 } 00:38:08.292 ] 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:08.292 "name": "Existed_Raid", 00:38:08.292 "uuid": "06ba5dda-25d5-4316-9185-43b25b21bdfb", 00:38:08.292 "strip_size_kb": 0, 00:38:08.292 "state": "configuring", 00:38:08.292 "raid_level": "raid1", 00:38:08.292 "superblock": true, 00:38:08.292 "num_base_bdevs": 2, 00:38:08.292 
"num_base_bdevs_discovered": 1, 00:38:08.292 "num_base_bdevs_operational": 2, 00:38:08.292 "base_bdevs_list": [ 00:38:08.292 { 00:38:08.292 "name": "BaseBdev1", 00:38:08.292 "uuid": "fe139263-46d5-419a-8422-2a7df877a735", 00:38:08.292 "is_configured": true, 00:38:08.292 "data_offset": 256, 00:38:08.292 "data_size": 7936 00:38:08.292 }, 00:38:08.292 { 00:38:08.292 "name": "BaseBdev2", 00:38:08.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:08.292 "is_configured": false, 00:38:08.292 "data_offset": 0, 00:38:08.292 "data_size": 0 00:38:08.292 } 00:38:08.292 ] 00:38:08.292 }' 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:08.292 14:07:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.551 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:08.551 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.551 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.551 [2024-10-09 14:07:15.095973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:08.551 [2024-10-09 14:07:15.096024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:38:08.551 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.810 [2024-10-09 14:07:15.108018] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:08.810 [2024-10-09 14:07:15.110333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:08.810 [2024-10-09 14:07:15.110482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:08.810 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:08.811 "name": "Existed_Raid", 00:38:08.811 "uuid": "57c5f5a2-6bdb-44c5-9278-7fc78807b014", 00:38:08.811 "strip_size_kb": 0, 00:38:08.811 "state": "configuring", 00:38:08.811 "raid_level": "raid1", 00:38:08.811 "superblock": true, 00:38:08.811 "num_base_bdevs": 2, 00:38:08.811 "num_base_bdevs_discovered": 1, 00:38:08.811 "num_base_bdevs_operational": 2, 00:38:08.811 "base_bdevs_list": [ 00:38:08.811 { 00:38:08.811 "name": "BaseBdev1", 00:38:08.811 "uuid": "fe139263-46d5-419a-8422-2a7df877a735", 00:38:08.811 "is_configured": true, 00:38:08.811 "data_offset": 256, 00:38:08.811 "data_size": 7936 00:38:08.811 }, 00:38:08.811 { 00:38:08.811 "name": "BaseBdev2", 00:38:08.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:08.811 "is_configured": false, 00:38:08.811 "data_offset": 0, 00:38:08.811 "data_size": 0 00:38:08.811 } 00:38:08.811 ] 00:38:08.811 }' 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:08.811 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.070 14:07:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.070 [2024-10-09 14:07:15.593175] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:09.070 [2024-10-09 14:07:15.593397] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:38:09.070 [2024-10-09 14:07:15.593417] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:09.070 [2024-10-09 14:07:15.593829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:09.070 BaseBdev2 00:38:09.070 [2024-10-09 14:07:15.593992] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:38:09.070 [2024-10-09 14:07:15.594026] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:38:09.070 [2024-10-09 14:07:15.594154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:38:09.070 14:07:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.070 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.070 [ 00:38:09.070 { 00:38:09.070 "name": "BaseBdev2", 00:38:09.070 "aliases": [ 00:38:09.070 "2731b0f1-922b-48bd-b592-e82b597dcd03" 00:38:09.070 ], 00:38:09.070 "product_name": "Malloc disk", 00:38:09.070 "block_size": 4096, 00:38:09.070 "num_blocks": 8192, 00:38:09.070 "uuid": "2731b0f1-922b-48bd-b592-e82b597dcd03", 00:38:09.070 "assigned_rate_limits": { 00:38:09.070 "rw_ios_per_sec": 0, 00:38:09.070 "rw_mbytes_per_sec": 0, 00:38:09.070 "r_mbytes_per_sec": 0, 00:38:09.070 "w_mbytes_per_sec": 0 00:38:09.070 }, 00:38:09.070 "claimed": true, 00:38:09.330 "claim_type": "exclusive_write", 00:38:09.330 "zoned": false, 00:38:09.330 "supported_io_types": { 00:38:09.330 "read": true, 00:38:09.330 "write": true, 00:38:09.330 "unmap": true, 00:38:09.330 "flush": true, 00:38:09.330 "reset": true, 00:38:09.330 "nvme_admin": false, 00:38:09.330 "nvme_io": false, 00:38:09.330 "nvme_io_md": false, 00:38:09.330 "write_zeroes": true, 00:38:09.330 "zcopy": true, 00:38:09.330 "get_zone_info": false, 00:38:09.330 "zone_management": false, 00:38:09.330 "zone_append": false, 00:38:09.330 "compare": false, 00:38:09.330 "compare_and_write": false, 00:38:09.330 "abort": true, 00:38:09.330 "seek_hole": false, 00:38:09.330 "seek_data": false, 00:38:09.330 "copy": true, 00:38:09.330 "nvme_iov_md": false 
00:38:09.330 }, 00:38:09.330 "memory_domains": [ 00:38:09.330 { 00:38:09.330 "dma_device_id": "system", 00:38:09.330 "dma_device_type": 1 00:38:09.330 }, 00:38:09.330 { 00:38:09.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.330 "dma_device_type": 2 00:38:09.330 } 00:38:09.330 ], 00:38:09.330 "driver_specific": {} 00:38:09.330 } 00:38:09.330 ] 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:09.330 "name": "Existed_Raid", 00:38:09.330 "uuid": "57c5f5a2-6bdb-44c5-9278-7fc78807b014", 00:38:09.330 "strip_size_kb": 0, 00:38:09.330 "state": "online", 00:38:09.330 "raid_level": "raid1", 00:38:09.330 "superblock": true, 00:38:09.330 "num_base_bdevs": 2, 00:38:09.330 "num_base_bdevs_discovered": 2, 00:38:09.330 "num_base_bdevs_operational": 2, 00:38:09.330 "base_bdevs_list": [ 00:38:09.330 { 00:38:09.330 "name": "BaseBdev1", 00:38:09.330 "uuid": "fe139263-46d5-419a-8422-2a7df877a735", 00:38:09.330 "is_configured": true, 00:38:09.330 "data_offset": 256, 00:38:09.330 "data_size": 7936 00:38:09.330 }, 00:38:09.330 { 00:38:09.330 "name": "BaseBdev2", 00:38:09.330 "uuid": "2731b0f1-922b-48bd-b592-e82b597dcd03", 00:38:09.330 "is_configured": true, 00:38:09.330 "data_offset": 256, 00:38:09.330 "data_size": 7936 00:38:09.330 } 00:38:09.330 ] 00:38:09.330 }' 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:09.330 14:07:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:09.588 14:07:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.588 [2024-10-09 14:07:16.089643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.588 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:09.588 "name": "Existed_Raid", 00:38:09.588 "aliases": [ 00:38:09.588 "57c5f5a2-6bdb-44c5-9278-7fc78807b014" 00:38:09.588 ], 00:38:09.588 "product_name": "Raid Volume", 00:38:09.588 "block_size": 4096, 00:38:09.588 "num_blocks": 7936, 00:38:09.588 "uuid": "57c5f5a2-6bdb-44c5-9278-7fc78807b014", 00:38:09.588 "assigned_rate_limits": { 00:38:09.588 "rw_ios_per_sec": 0, 00:38:09.589 "rw_mbytes_per_sec": 0, 00:38:09.589 "r_mbytes_per_sec": 0, 00:38:09.589 "w_mbytes_per_sec": 0 00:38:09.589 }, 00:38:09.589 "claimed": false, 00:38:09.589 "zoned": false, 00:38:09.589 "supported_io_types": { 00:38:09.589 "read": true, 
00:38:09.589 "write": true, 00:38:09.589 "unmap": false, 00:38:09.589 "flush": false, 00:38:09.589 "reset": true, 00:38:09.589 "nvme_admin": false, 00:38:09.589 "nvme_io": false, 00:38:09.589 "nvme_io_md": false, 00:38:09.589 "write_zeroes": true, 00:38:09.589 "zcopy": false, 00:38:09.589 "get_zone_info": false, 00:38:09.589 "zone_management": false, 00:38:09.589 "zone_append": false, 00:38:09.589 "compare": false, 00:38:09.589 "compare_and_write": false, 00:38:09.589 "abort": false, 00:38:09.589 "seek_hole": false, 00:38:09.589 "seek_data": false, 00:38:09.589 "copy": false, 00:38:09.589 "nvme_iov_md": false 00:38:09.589 }, 00:38:09.589 "memory_domains": [ 00:38:09.589 { 00:38:09.589 "dma_device_id": "system", 00:38:09.589 "dma_device_type": 1 00:38:09.589 }, 00:38:09.589 { 00:38:09.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.589 "dma_device_type": 2 00:38:09.589 }, 00:38:09.589 { 00:38:09.589 "dma_device_id": "system", 00:38:09.589 "dma_device_type": 1 00:38:09.589 }, 00:38:09.589 { 00:38:09.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:09.589 "dma_device_type": 2 00:38:09.589 } 00:38:09.589 ], 00:38:09.589 "driver_specific": { 00:38:09.589 "raid": { 00:38:09.589 "uuid": "57c5f5a2-6bdb-44c5-9278-7fc78807b014", 00:38:09.589 "strip_size_kb": 0, 00:38:09.589 "state": "online", 00:38:09.589 "raid_level": "raid1", 00:38:09.589 "superblock": true, 00:38:09.589 "num_base_bdevs": 2, 00:38:09.589 "num_base_bdevs_discovered": 2, 00:38:09.589 "num_base_bdevs_operational": 2, 00:38:09.589 "base_bdevs_list": [ 00:38:09.589 { 00:38:09.589 "name": "BaseBdev1", 00:38:09.589 "uuid": "fe139263-46d5-419a-8422-2a7df877a735", 00:38:09.589 "is_configured": true, 00:38:09.589 "data_offset": 256, 00:38:09.589 "data_size": 7936 00:38:09.589 }, 00:38:09.589 { 00:38:09.589 "name": "BaseBdev2", 00:38:09.589 "uuid": "2731b0f1-922b-48bd-b592-e82b597dcd03", 00:38:09.589 "is_configured": true, 00:38:09.589 "data_offset": 256, 00:38:09.589 "data_size": 7936 00:38:09.589 } 
00:38:09.589 ] 00:38:09.589 } 00:38:09.589 } 00:38:09.589 }' 00:38:09.589 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:09.848 BaseBdev2' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:09.848 14:07:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.848 [2024-10-09 14:07:16.313420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:09.848 14:07:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:09.848 "name": "Existed_Raid", 00:38:09.848 "uuid": "57c5f5a2-6bdb-44c5-9278-7fc78807b014", 00:38:09.848 "strip_size_kb": 0, 00:38:09.848 "state": "online", 00:38:09.848 "raid_level": "raid1", 00:38:09.848 "superblock": true, 00:38:09.848 
"num_base_bdevs": 2, 00:38:09.848 "num_base_bdevs_discovered": 1, 00:38:09.848 "num_base_bdevs_operational": 1, 00:38:09.848 "base_bdevs_list": [ 00:38:09.848 { 00:38:09.848 "name": null, 00:38:09.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:09.848 "is_configured": false, 00:38:09.848 "data_offset": 0, 00:38:09.848 "data_size": 7936 00:38:09.848 }, 00:38:09.848 { 00:38:09.848 "name": "BaseBdev2", 00:38:09.848 "uuid": "2731b0f1-922b-48bd-b592-e82b597dcd03", 00:38:09.848 "is_configured": true, 00:38:09.848 "data_offset": 256, 00:38:09.848 "data_size": 7936 00:38:09.848 } 00:38:09.848 ] 00:38:09.848 }' 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:09.848 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.416 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:10.416 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:10.416 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.417 [2024-10-09 14:07:16.845878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:10.417 [2024-10-09 14:07:16.846169] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:10.417 [2024-10-09 14:07:16.858809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:10.417 [2024-10-09 14:07:16.858859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:10.417 [2024-10-09 14:07:16.858881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:10.417 14:07:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96744 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96744 ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96744 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96744 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:10.417 killing process with pid 96744 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96744' 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96744 00:38:10.417 [2024-10-09 14:07:16.951376] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:10.417 14:07:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96744 00:38:10.417 [2024-10-09 14:07:16.952457] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:10.676 14:07:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:38:10.676 ************************************ 00:38:10.676 END TEST raid_state_function_test_sb_4k 00:38:10.676 
************************************ 00:38:10.676 00:38:10.676 real 0m4.053s 00:38:10.676 user 0m6.365s 00:38:10.676 sys 0m0.918s 00:38:10.676 14:07:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:10.676 14:07:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.935 14:07:17 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:38:10.935 14:07:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:10.935 14:07:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:10.935 14:07:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:10.935 ************************************ 00:38:10.935 START TEST raid_superblock_test_4k 00:38:10.935 ************************************ 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:38:10.935 
14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:38:10.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96985 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96985 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96985 ']' 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:10.935 14:07:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:10.935 [2024-10-09 14:07:17.383564] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:38:10.935 [2024-10-09 14:07:17.383990] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96985 ] 00:38:11.194 [2024-10-09 14:07:17.557671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.194 [2024-10-09 14:07:17.602935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.194 [2024-10-09 14:07:17.646336] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:11.194 [2024-10-09 14:07:17.646380] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:11.761 malloc1 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:11.761 [2024-10-09 14:07:18.278499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:11.761 [2024-10-09 14:07:18.278729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:11.761 [2024-10-09 14:07:18.278795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:11.761 [2024-10-09 14:07:18.278898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:11.761 [2024-10-09 14:07:18.281411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:11.761 [2024-10-09 14:07:18.281568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:11.761 pt1 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:11.761 malloc2 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:11.761 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.020 [2024-10-09 14:07:18.315378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:12.020 [2024-10-09 14:07:18.315575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.020 [2024-10-09 14:07:18.315606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:12.020 [2024-10-09 14:07:18.315623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.020 [2024-10-09 14:07:18.318608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.020 [2024-10-09 
14:07:18.318655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:12.020 pt2 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.020 [2024-10-09 14:07:18.327504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:12.020 [2024-10-09 14:07:18.329757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:12.020 [2024-10-09 14:07:18.330005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:38:12.020 [2024-10-09 14:07:18.330027] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:12.020 [2024-10-09 14:07:18.330319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:12.020 [2024-10-09 14:07:18.330453] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:38:12.020 [2024-10-09 14:07:18.330464] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:38:12.020 [2024-10-09 14:07:18.330604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:12.020 "name": "raid_bdev1", 00:38:12.020 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:12.020 "strip_size_kb": 0, 00:38:12.020 "state": "online", 00:38:12.020 "raid_level": "raid1", 00:38:12.020 "superblock": true, 00:38:12.020 "num_base_bdevs": 2, 00:38:12.020 
"num_base_bdevs_discovered": 2, 00:38:12.020 "num_base_bdevs_operational": 2, 00:38:12.020 "base_bdevs_list": [ 00:38:12.020 { 00:38:12.020 "name": "pt1", 00:38:12.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:12.020 "is_configured": true, 00:38:12.020 "data_offset": 256, 00:38:12.020 "data_size": 7936 00:38:12.020 }, 00:38:12.020 { 00:38:12.020 "name": "pt2", 00:38:12.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:12.020 "is_configured": true, 00:38:12.020 "data_offset": 256, 00:38:12.020 "data_size": 7936 00:38:12.020 } 00:38:12.020 ] 00:38:12.020 }' 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:12.020 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.279 [2024-10-09 14:07:18.791880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:12.279 "name": "raid_bdev1", 00:38:12.279 "aliases": [ 00:38:12.279 "fa274135-79ad-418b-bce7-b91555eaf263" 00:38:12.279 ], 00:38:12.279 "product_name": "Raid Volume", 00:38:12.279 "block_size": 4096, 00:38:12.279 "num_blocks": 7936, 00:38:12.279 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:12.279 "assigned_rate_limits": { 00:38:12.279 "rw_ios_per_sec": 0, 00:38:12.279 "rw_mbytes_per_sec": 0, 00:38:12.279 "r_mbytes_per_sec": 0, 00:38:12.279 "w_mbytes_per_sec": 0 00:38:12.279 }, 00:38:12.279 "claimed": false, 00:38:12.279 "zoned": false, 00:38:12.279 "supported_io_types": { 00:38:12.279 "read": true, 00:38:12.279 "write": true, 00:38:12.279 "unmap": false, 00:38:12.279 "flush": false, 00:38:12.279 "reset": true, 00:38:12.279 "nvme_admin": false, 00:38:12.279 "nvme_io": false, 00:38:12.279 "nvme_io_md": false, 00:38:12.279 "write_zeroes": true, 00:38:12.279 "zcopy": false, 00:38:12.279 "get_zone_info": false, 00:38:12.279 "zone_management": false, 00:38:12.279 "zone_append": false, 00:38:12.279 "compare": false, 00:38:12.279 "compare_and_write": false, 00:38:12.279 "abort": false, 00:38:12.279 "seek_hole": false, 00:38:12.279 "seek_data": false, 00:38:12.279 "copy": false, 00:38:12.279 "nvme_iov_md": false 00:38:12.279 }, 00:38:12.279 "memory_domains": [ 00:38:12.279 { 00:38:12.279 "dma_device_id": "system", 00:38:12.279 "dma_device_type": 1 00:38:12.279 }, 00:38:12.279 { 00:38:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:12.279 "dma_device_type": 2 00:38:12.279 }, 00:38:12.279 { 00:38:12.279 "dma_device_id": "system", 00:38:12.279 "dma_device_type": 1 00:38:12.279 }, 00:38:12.279 { 00:38:12.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:12.279 "dma_device_type": 2 00:38:12.279 } 00:38:12.279 ], 
00:38:12.279 "driver_specific": { 00:38:12.279 "raid": { 00:38:12.279 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:12.279 "strip_size_kb": 0, 00:38:12.279 "state": "online", 00:38:12.279 "raid_level": "raid1", 00:38:12.279 "superblock": true, 00:38:12.279 "num_base_bdevs": 2, 00:38:12.279 "num_base_bdevs_discovered": 2, 00:38:12.279 "num_base_bdevs_operational": 2, 00:38:12.279 "base_bdevs_list": [ 00:38:12.279 { 00:38:12.279 "name": "pt1", 00:38:12.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:12.279 "is_configured": true, 00:38:12.279 "data_offset": 256, 00:38:12.279 "data_size": 7936 00:38:12.279 }, 00:38:12.279 { 00:38:12.279 "name": "pt2", 00:38:12.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:12.279 "is_configured": true, 00:38:12.279 "data_offset": 256, 00:38:12.279 "data_size": 7936 00:38:12.279 } 00:38:12.279 ] 00:38:12.279 } 00:38:12.279 } 00:38:12.279 }' 00:38:12.279 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:12.538 pt2' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.538 14:07:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:12.538 14:07:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:38:12.538 [2024-10-09 14:07:19.011844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fa274135-79ad-418b-bce7-b91555eaf263 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fa274135-79ad-418b-bce7-b91555eaf263 ']' 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.538 [2024-10-09 14:07:19.059631] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:12.538 [2024-10-09 14:07:19.059660] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:12.538 [2024-10-09 14:07:19.059741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:12.538 [2024-10-09 14:07:19.059823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:12.538 [2024-10-09 14:07:19.059835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:38:12.538 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:12.798 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.799 [2024-10-09 14:07:19.187674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:12.799 [2024-10-09 14:07:19.189868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:12.799 [2024-10-09 14:07:19.189942] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:12.799 [2024-10-09 14:07:19.189991] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:12.799 [2024-10-09 14:07:19.190011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:12.799 [2024-10-09 14:07:19.190021] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:38:12.799 request: 00:38:12.799 { 00:38:12.799 "name": "raid_bdev1", 00:38:12.799 "raid_level": "raid1", 00:38:12.799 "base_bdevs": [ 00:38:12.799 "malloc1", 00:38:12.799 "malloc2" 00:38:12.799 ], 00:38:12.799 "superblock": false, 00:38:12.799 "method": "bdev_raid_create", 00:38:12.799 "req_id": 1 00:38:12.799 } 00:38:12.799 Got JSON-RPC error response 00:38:12.799 response: 00:38:12.799 { 00:38:12.799 "code": -17, 00:38:12.799 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:12.799 } 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.799 [2024-10-09 14:07:19.251654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:12.799 [2024-10-09 14:07:19.251798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:12.799 [2024-10-09 14:07:19.251853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:12.799 [2024-10-09 14:07:19.251984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:12.799 [2024-10-09 14:07:19.254479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:12.799 [2024-10-09 14:07:19.254639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:12.799 [2024-10-09 14:07:19.254739] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:12.799 [2024-10-09 14:07:19.254778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:12.799 pt1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:12.799 "name": "raid_bdev1", 00:38:12.799 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:12.799 "strip_size_kb": 0, 00:38:12.799 "state": "configuring", 00:38:12.799 "raid_level": "raid1", 00:38:12.799 "superblock": true, 00:38:12.799 "num_base_bdevs": 2, 00:38:12.799 "num_base_bdevs_discovered": 1, 00:38:12.799 "num_base_bdevs_operational": 2, 00:38:12.799 "base_bdevs_list": [ 00:38:12.799 { 00:38:12.799 "name": "pt1", 00:38:12.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:12.799 "is_configured": true, 00:38:12.799 "data_offset": 256, 00:38:12.799 "data_size": 7936 00:38:12.799 }, 00:38:12.799 { 00:38:12.799 "name": null, 00:38:12.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:12.799 "is_configured": false, 00:38:12.799 "data_offset": 256, 00:38:12.799 "data_size": 7936 00:38:12.799 } 
00:38:12.799 ] 00:38:12.799 }' 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:12.799 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.367 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.368 [2024-10-09 14:07:19.719796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:13.368 [2024-10-09 14:07:19.719975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:13.368 [2024-10-09 14:07:19.720008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:13.368 [2024-10-09 14:07:19.720020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:13.368 [2024-10-09 14:07:19.720444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:13.368 [2024-10-09 14:07:19.720463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:13.368 [2024-10-09 14:07:19.720538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:13.368 [2024-10-09 14:07:19.720579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:13.368 [2024-10-09 14:07:19.720685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:38:13.368 [2024-10-09 14:07:19.720696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:13.368 [2024-10-09 14:07:19.720936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:13.368 [2024-10-09 14:07:19.721045] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:38:13.368 [2024-10-09 14:07:19.721062] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:38:13.368 [2024-10-09 14:07:19.721163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:13.368 pt2 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:13.368 "name": "raid_bdev1", 00:38:13.368 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:13.368 "strip_size_kb": 0, 00:38:13.368 "state": "online", 00:38:13.368 "raid_level": "raid1", 00:38:13.368 "superblock": true, 00:38:13.368 "num_base_bdevs": 2, 00:38:13.368 "num_base_bdevs_discovered": 2, 00:38:13.368 "num_base_bdevs_operational": 2, 00:38:13.368 "base_bdevs_list": [ 00:38:13.368 { 00:38:13.368 "name": "pt1", 00:38:13.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:13.368 "is_configured": true, 00:38:13.368 "data_offset": 256, 00:38:13.368 "data_size": 7936 00:38:13.368 }, 00:38:13.368 { 00:38:13.368 "name": "pt2", 00:38:13.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:13.368 "is_configured": true, 00:38:13.368 "data_offset": 256, 00:38:13.368 "data_size": 7936 00:38:13.368 } 00:38:13.368 ] 00:38:13.368 }' 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:13.368 14:07:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:38:13.626 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:13.884 [2024-10-09 14:07:20.184183] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.884 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:13.884 "name": "raid_bdev1", 00:38:13.884 "aliases": [ 00:38:13.884 "fa274135-79ad-418b-bce7-b91555eaf263" 00:38:13.884 ], 00:38:13.884 "product_name": "Raid Volume", 00:38:13.884 "block_size": 4096, 00:38:13.884 "num_blocks": 7936, 00:38:13.885 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:13.885 "assigned_rate_limits": { 00:38:13.885 "rw_ios_per_sec": 0, 00:38:13.885 "rw_mbytes_per_sec": 0, 00:38:13.885 "r_mbytes_per_sec": 0, 00:38:13.885 "w_mbytes_per_sec": 0 00:38:13.885 }, 00:38:13.885 "claimed": false, 00:38:13.885 "zoned": false, 00:38:13.885 "supported_io_types": { 00:38:13.885 "read": true, 00:38:13.885 "write": true, 00:38:13.885 "unmap": false, 
00:38:13.885 "flush": false, 00:38:13.885 "reset": true, 00:38:13.885 "nvme_admin": false, 00:38:13.885 "nvme_io": false, 00:38:13.885 "nvme_io_md": false, 00:38:13.885 "write_zeroes": true, 00:38:13.885 "zcopy": false, 00:38:13.885 "get_zone_info": false, 00:38:13.885 "zone_management": false, 00:38:13.885 "zone_append": false, 00:38:13.885 "compare": false, 00:38:13.885 "compare_and_write": false, 00:38:13.885 "abort": false, 00:38:13.885 "seek_hole": false, 00:38:13.885 "seek_data": false, 00:38:13.885 "copy": false, 00:38:13.885 "nvme_iov_md": false 00:38:13.885 }, 00:38:13.885 "memory_domains": [ 00:38:13.885 { 00:38:13.885 "dma_device_id": "system", 00:38:13.885 "dma_device_type": 1 00:38:13.885 }, 00:38:13.885 { 00:38:13.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:13.885 "dma_device_type": 2 00:38:13.885 }, 00:38:13.885 { 00:38:13.885 "dma_device_id": "system", 00:38:13.885 "dma_device_type": 1 00:38:13.885 }, 00:38:13.885 { 00:38:13.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:13.885 "dma_device_type": 2 00:38:13.885 } 00:38:13.885 ], 00:38:13.885 "driver_specific": { 00:38:13.885 "raid": { 00:38:13.885 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:13.885 "strip_size_kb": 0, 00:38:13.885 "state": "online", 00:38:13.885 "raid_level": "raid1", 00:38:13.885 "superblock": true, 00:38:13.885 "num_base_bdevs": 2, 00:38:13.885 "num_base_bdevs_discovered": 2, 00:38:13.885 "num_base_bdevs_operational": 2, 00:38:13.885 "base_bdevs_list": [ 00:38:13.885 { 00:38:13.885 "name": "pt1", 00:38:13.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:13.885 "is_configured": true, 00:38:13.885 "data_offset": 256, 00:38:13.885 "data_size": 7936 00:38:13.885 }, 00:38:13.885 { 00:38:13.885 "name": "pt2", 00:38:13.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:13.885 "is_configured": true, 00:38:13.885 "data_offset": 256, 00:38:13.885 "data_size": 7936 00:38:13.885 } 00:38:13.885 ] 00:38:13.885 } 00:38:13.885 } 00:38:13.885 }' 00:38:13.885 
14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:13.885 pt2' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:38:13.885 [2024-10-09 14:07:20.408178] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:13.885 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fa274135-79ad-418b-bce7-b91555eaf263 '!=' fa274135-79ad-418b-bce7-b91555eaf263 ']' 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.144 [2024-10-09 14:07:20.455967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:14.144 "name": "raid_bdev1", 00:38:14.144 "uuid": 
"fa274135-79ad-418b-bce7-b91555eaf263", 00:38:14.144 "strip_size_kb": 0, 00:38:14.144 "state": "online", 00:38:14.144 "raid_level": "raid1", 00:38:14.144 "superblock": true, 00:38:14.144 "num_base_bdevs": 2, 00:38:14.144 "num_base_bdevs_discovered": 1, 00:38:14.144 "num_base_bdevs_operational": 1, 00:38:14.144 "base_bdevs_list": [ 00:38:14.144 { 00:38:14.144 "name": null, 00:38:14.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.144 "is_configured": false, 00:38:14.144 "data_offset": 0, 00:38:14.144 "data_size": 7936 00:38:14.144 }, 00:38:14.144 { 00:38:14.144 "name": "pt2", 00:38:14.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:14.144 "is_configured": true, 00:38:14.144 "data_offset": 256, 00:38:14.144 "data_size": 7936 00:38:14.144 } 00:38:14.144 ] 00:38:14.144 }' 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:14.144 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.403 [2024-10-09 14:07:20.900029] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:14.403 [2024-10-09 14:07:20.900060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:14.403 [2024-10-09 14:07:20.900133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:14.403 [2024-10-09 14:07:20.900186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:14.403 [2024-10-09 14:07:20.900197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state 
offline 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.403 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.662 [2024-10-09 14:07:20.972041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:14.662 [2024-10-09 14:07:20.972094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:14.662 [2024-10-09 14:07:20.972116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:14.662 [2024-10-09 14:07:20.972127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:14.662 [2024-10-09 14:07:20.974673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:14.662 [2024-10-09 14:07:20.974711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:14.662 [2024-10-09 14:07:20.974787] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:14.662 [2024-10-09 14:07:20.974817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:14.662 [2024-10-09 14:07:20.974891] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:38:14.662 [2024-10-09 14:07:20.974901] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:14.662 [2024-10-09 14:07:20.975130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:14.662 [2024-10-09 14:07:20.975244] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:38:14.662 [2024-10-09 14:07:20.975257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000006d00 00:38:14.662 [2024-10-09 14:07:20.975358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:14.662 pt2 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.662 14:07:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.662 14:07:21 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:14.662 "name": "raid_bdev1", 00:38:14.662 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:14.662 "strip_size_kb": 0, 00:38:14.662 "state": "online", 00:38:14.662 "raid_level": "raid1", 00:38:14.662 "superblock": true, 00:38:14.662 "num_base_bdevs": 2, 00:38:14.662 "num_base_bdevs_discovered": 1, 00:38:14.662 "num_base_bdevs_operational": 1, 00:38:14.662 "base_bdevs_list": [ 00:38:14.662 { 00:38:14.662 "name": null, 00:38:14.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.662 "is_configured": false, 00:38:14.662 "data_offset": 256, 00:38:14.662 "data_size": 7936 00:38:14.662 }, 00:38:14.662 { 00:38:14.662 "name": "pt2", 00:38:14.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:14.662 "is_configured": true, 00:38:14.662 "data_offset": 256, 00:38:14.662 "data_size": 7936 00:38:14.662 } 00:38:14.662 ] 00:38:14.662 }' 00:38:14.662 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:14.662 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 [2024-10-09 14:07:21.432201] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:14.921 [2024-10-09 14:07:21.432229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:14.921 [2024-10-09 14:07:21.432302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:14.921 [2024-10-09 14:07:21.432350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:38:14.921 [2024-10-09 14:07:21.432364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:14.921 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:15.181 [2024-10-09 14:07:21.492169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:15.181 [2024-10-09 14:07:21.492340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:15.181 [2024-10-09 14:07:21.492373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:15.181 [2024-10-09 14:07:21.492393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:15.181 [2024-10-09 14:07:21.494939] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:15.181 [2024-10-09 14:07:21.494979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:15.181 [2024-10-09 14:07:21.495055] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:15.181 [2024-10-09 14:07:21.495095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:15.181 [2024-10-09 14:07:21.495189] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:15.181 [2024-10-09 14:07:21.495203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:15.181 [2024-10-09 14:07:21.495229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:38:15.181 [2024-10-09 14:07:21.495269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:15.181 [2024-10-09 14:07:21.495336] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:38:15.181 [2024-10-09 14:07:21.495351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:15.181 [2024-10-09 14:07:21.495610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:15.181 [2024-10-09 14:07:21.495723] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:38:15.181 [2024-10-09 14:07:21.495734] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:38:15.181 [2024-10-09 14:07:21.495842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:15.181 pt1 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:15.181 "name": "raid_bdev1", 00:38:15.181 "uuid": "fa274135-79ad-418b-bce7-b91555eaf263", 00:38:15.181 "strip_size_kb": 0, 00:38:15.181 "state": "online", 00:38:15.181 
"raid_level": "raid1", 00:38:15.181 "superblock": true, 00:38:15.181 "num_base_bdevs": 2, 00:38:15.181 "num_base_bdevs_discovered": 1, 00:38:15.181 "num_base_bdevs_operational": 1, 00:38:15.181 "base_bdevs_list": [ 00:38:15.181 { 00:38:15.181 "name": null, 00:38:15.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:15.181 "is_configured": false, 00:38:15.181 "data_offset": 256, 00:38:15.181 "data_size": 7936 00:38:15.181 }, 00:38:15.181 { 00:38:15.181 "name": "pt2", 00:38:15.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:15.181 "is_configured": true, 00:38:15.181 "data_offset": 256, 00:38:15.181 "data_size": 7936 00:38:15.181 } 00:38:15.181 ] 00:38:15.181 }' 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:15.181 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:15.440 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:38:15.440 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.440 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:15.440 14:07:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:15.440 14:07:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:38:15.698 [2024-10-09 14:07:22.016521] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' fa274135-79ad-418b-bce7-b91555eaf263 '!=' fa274135-79ad-418b-bce7-b91555eaf263 ']' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96985 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96985 ']' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96985 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96985 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 killing process with pid 96985 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96985' 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96985 00:38:15.698 [2024-10-09 14:07:22.094487] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:15.698 [2024-10-09 14:07:22.094575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:15.698 [2024-10-09 14:07:22.094624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:15.698 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96985
00:38:15.698 [2024-10-09 14:07:22.094636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:38:15.698 [2024-10-09 14:07:22.118944] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:15.956 14:07:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:38:15.956 00:38:15.956 real 0m5.098s 00:38:15.956 user 0m8.367s 00:38:15.956 sys 0m1.145s 00:38:15.956 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:15.956 ************************************ 00:38:15.956 END TEST raid_superblock_test_4k 00:38:15.956 ************************************ 00:38:15.956 14:07:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:38:15.956 14:07:22 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:38:15.956 14:07:22 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:38:15.956 14:07:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:38:15.956 14:07:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:15.956 14:07:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:15.956 ************************************ 00:38:15.956 START TEST raid_rebuild_test_sb_4k 00:38:15.956 ************************************ 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:38:15.956 14:07:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97301 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97301 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97301 ']' 00:38:15.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:15.956 14:07:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:16.213 [2024-10-09 14:07:22.561718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:16.213 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:16.214 Zero copy mechanism will not be used. 
00:38:16.214 [2024-10-09 14:07:22.561913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97301 ] 00:38:16.214 [2024-10-09 14:07:22.737495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.472 [2024-10-09 14:07:22.781301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.472 [2024-10-09 14:07:22.824602] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:16.472 [2024-10-09 14:07:22.824649] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:17.039 BaseBdev1_malloc 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:17.039 [2024-10-09 14:07:23.416699] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:38:17.039 [2024-10-09 14:07:23.416761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:38:17.039 [2024-10-09 14:07:23.416787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:38:17.039 [2024-10-09 14:07:23.416814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:38:17.039 [2024-10-09 14:07:23.419350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:38:17.039 [2024-10-09 14:07:23.419390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:38:17.039 BaseBdev1
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.039 BaseBdev2_malloc
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.039 [2024-10-09 14:07:23.454571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:38:17.039 [2024-10-09 14:07:23.454623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:38:17.039 [2024-10-09 14:07:23.454646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:38:17.039 [2024-10-09 14:07:23.454658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:38:17.039 [2024-10-09 14:07:23.457105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:38:17.039 [2024-10-09 14:07:23.457143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:38:17.039 BaseBdev2
00:38:17.039 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.040 spare_malloc
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.040 spare_delay
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.040 [2024-10-09 14:07:23.491683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:38:17.040 [2024-10-09 14:07:23.491738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:38:17.040 [2024-10-09 14:07:23.491763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:38:17.040 [2024-10-09 14:07:23.491774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:38:17.040 [2024-10-09 14:07:23.494264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:38:17.040 [2024-10-09 14:07:23.494304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:38:17.040 spare
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.040 [2024-10-09 14:07:23.503734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:38:17.040 [2024-10-09 14:07:23.505957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:38:17.040 [2024-10-09 14:07:23.506110] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:38:17.040 [2024-10-09 14:07:23.506124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:38:17.040 [2024-10-09 14:07:23.506379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70
00:38:17.040 [2024-10-09 14:07:23.506502] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:38:17.040 [2024-10-09 14:07:23.506516] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280
00:38:17.040 [2024-10-09 14:07:23.506651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:17.040 "name": "raid_bdev1",
00:38:17.040 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:17.040 "strip_size_kb": 0,
00:38:17.040 "state": "online",
00:38:17.040 "raid_level": "raid1",
00:38:17.040 "superblock": true,
00:38:17.040 "num_base_bdevs": 2,
00:38:17.040 "num_base_bdevs_discovered": 2,
00:38:17.040 "num_base_bdevs_operational": 2,
00:38:17.040 "base_bdevs_list": [
00:38:17.040 {
00:38:17.040 "name": "BaseBdev1",
00:38:17.040 "uuid": "04e952ec-ec82-539a-ab72-2e74960074c4",
00:38:17.040 "is_configured": true,
00:38:17.040 "data_offset": 256,
00:38:17.040 "data_size": 7936
00:38:17.040 },
00:38:17.040 {
00:38:17.040 "name": "BaseBdev2",
00:38:17.040 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:17.040 "is_configured": true,
00:38:17.040 "data_offset": 256,
00:38:17.040 "data_size": 7936
00:38:17.040 }
00:38:17.040 ]
00:38:17.040 }'
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:17.040 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.607 [2024-10-09 14:07:23.960107] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:17.607 14:07:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:38:17.607 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:38:17.867 [2024-10-09 14:07:24.315970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:38:17.867 /dev/nbd0
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:38:17.867 1+0 records in
00:38:17.867 1+0 records out
00:38:17.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651323 s, 6.3 MB/s
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:38:17.867 14:07:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:38:18.803 7936+0 records in
00:38:18.803 7936+0 records out
00:38:18.803 32505856 bytes (33 MB, 31 MiB) copied, 0.703002 s, 46.2 MB/s
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:38:18.803 [2024-10-09 14:07:25.285345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:18.803 [2024-10-09 14:07:25.297729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:18.803 "name": "raid_bdev1",
00:38:18.803 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:18.803 "strip_size_kb": 0,
00:38:18.803 "state": "online",
00:38:18.803 "raid_level": "raid1",
00:38:18.803 "superblock": true,
00:38:18.803 "num_base_bdevs": 2,
00:38:18.803 "num_base_bdevs_discovered": 1,
00:38:18.803 "num_base_bdevs_operational": 1,
00:38:18.803 "base_bdevs_list": [
00:38:18.803 {
00:38:18.803 "name": null,
00:38:18.803 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:18.803 "is_configured": false,
00:38:18.803 "data_offset": 0,
00:38:18.803 "data_size": 7936
00:38:18.803 },
00:38:18.803 {
00:38:18.803 "name": "BaseBdev2",
00:38:18.803 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:18.803 "is_configured": true,
00:38:18.803 "data_offset": 256,
00:38:18.803 "data_size": 7936
00:38:18.803 }
00:38:18.803 ]
00:38:18.803 }'
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:18.803 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:19.372 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:38:19.372 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:19.372 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:19.372 [2024-10-09 14:07:25.713828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:38:19.372 [2024-10-09 14:07:25.718152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0
00:38:19.372 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:19.372 14:07:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:38:19.372 [2024-10-09 14:07:25.720403] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:20.306 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:38:20.307 "name": "raid_bdev1",
00:38:20.307 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:20.307 "strip_size_kb": 0,
00:38:20.307 "state": "online",
00:38:20.307 "raid_level": "raid1",
00:38:20.307 "superblock": true,
00:38:20.307 "num_base_bdevs": 2,
00:38:20.307 "num_base_bdevs_discovered": 2,
00:38:20.307 "num_base_bdevs_operational": 2,
00:38:20.307 "process": {
00:38:20.307 "type": "rebuild",
00:38:20.307 "target": "spare",
00:38:20.307 "progress": {
00:38:20.307 "blocks": 2560,
00:38:20.307 "percent": 32
00:38:20.307 }
00:38:20.307 },
00:38:20.307 "base_bdevs_list": [
00:38:20.307 {
00:38:20.307 "name": "spare",
00:38:20.307 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563",
00:38:20.307 "is_configured": true,
00:38:20.307 "data_offset": 256,
00:38:20.307 "data_size": 7936
00:38:20.307 },
00:38:20.307 {
00:38:20.307 "name": "BaseBdev2",
00:38:20.307 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:20.307 "is_configured": true,
00:38:20.307 "data_offset": 256,
00:38:20.307 "data_size": 7936
00:38:20.307 }
00:38:20.307 ]
00:38:20.307 }'
00:38:20.307 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:38:20.307 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:38:20.307 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:20.566 [2024-10-09 14:07:26.875336] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:38:20.566 [2024-10-09 14:07:26.927887] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:38:20.566 [2024-10-09 14:07:26.927950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:38:20.566 [2024-10-09 14:07:26.927988] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:38:20.566 [2024-10-09 14:07:26.927997] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:38:20.566 "name": "raid_bdev1",
00:38:20.566 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:20.566 "strip_size_kb": 0,
00:38:20.566 "state": "online",
00:38:20.566 "raid_level": "raid1",
00:38:20.566 "superblock": true,
00:38:20.566 "num_base_bdevs": 2,
00:38:20.566 "num_base_bdevs_discovered": 1,
00:38:20.566 "num_base_bdevs_operational": 1,
00:38:20.566 "base_bdevs_list": [
00:38:20.566 {
00:38:20.566 "name": null,
00:38:20.566 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:20.566 "is_configured": false,
00:38:20.566 "data_offset": 0,
00:38:20.566 "data_size": 7936
00:38:20.566 },
00:38:20.566 {
00:38:20.566 "name": "BaseBdev2",
00:38:20.566 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:20.566 "is_configured": true,
00:38:20.566 "data_offset": 256,
00:38:20.566 "data_size": 7936
00:38:20.566 }
00:38:20.566 ]
00:38:20.566 }'
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:38:20.566 14:07:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:38:21.134 "name": "raid_bdev1",
00:38:21.134 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:21.134 "strip_size_kb": 0,
00:38:21.134 "state": "online",
00:38:21.134 "raid_level": "raid1",
00:38:21.134 "superblock": true,
00:38:21.134 "num_base_bdevs": 2,
00:38:21.134 "num_base_bdevs_discovered": 1,
00:38:21.134 "num_base_bdevs_operational": 1,
00:38:21.134 "base_bdevs_list": [
00:38:21.134 {
00:38:21.134 "name": null,
00:38:21.134 "uuid": "00000000-0000-0000-0000-000000000000",
00:38:21.134 "is_configured": false,
00:38:21.134 "data_offset": 0,
00:38:21.134 "data_size": 7936
00:38:21.134 },
00:38:21.134 {
00:38:21.134 "name": "BaseBdev2",
00:38:21.134 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:21.134 "is_configured": true,
00:38:21.134 "data_offset": 256,
00:38:21.134 "data_size": 7936
00:38:21.134 }
00:38:21.134 ]
00:38:21.134 }'
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:21.134 [2024-10-09 14:07:27.520896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:38:21.134 [2024-10-09 14:07:27.525227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:21.134 14:07:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1
00:38:21.134 [2024-10-09 14:07:27.527585] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:38:22.142 "name": "raid_bdev1",
00:38:22.142 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:22.142 "strip_size_kb": 0,
00:38:22.142 "state": "online",
00:38:22.142 "raid_level": "raid1",
00:38:22.142 "superblock": true,
00:38:22.142 "num_base_bdevs": 2,
00:38:22.142 "num_base_bdevs_discovered": 2,
00:38:22.142 "num_base_bdevs_operational": 2,
00:38:22.142 "process": {
00:38:22.142 "type": "rebuild",
00:38:22.142 "target": "spare",
00:38:22.142 "progress": {
00:38:22.142 "blocks": 2560,
00:38:22.142 "percent": 32
00:38:22.142 }
00:38:22.142 },
00:38:22.142 "base_bdevs_list": [
00:38:22.142 {
00:38:22.142 "name": "spare",
00:38:22.142 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563",
00:38:22.142 "is_configured": true,
00:38:22.142 "data_offset": 256,
00:38:22.142 "data_size": 7936
00:38:22.142 },
00:38:22.142 {
00:38:22.142 "name": "BaseBdev2",
00:38:22.142 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:22.142 "is_configured": true,
00:38:22.142 "data_offset": 256,
00:38:22.142 "data_size": 7936
00:38:22.142 }
00:38:22.142 ]
00:38:22.142 }'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:38:22.142 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=582
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:22.143 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:38:22.401 "name": "raid_bdev1",
00:38:22.401 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:22.401 "strip_size_kb": 0,
00:38:22.401 "state": "online",
00:38:22.401 "raid_level": "raid1",
00:38:22.401 "superblock": true,
00:38:22.401 "num_base_bdevs": 2,
00:38:22.401 "num_base_bdevs_discovered": 2,
00:38:22.401 "num_base_bdevs_operational": 2,
00:38:22.401 "process": {
00:38:22.401 "type": "rebuild",
00:38:22.401 "target": "spare",
00:38:22.401 "progress": {
00:38:22.401 "blocks": 2816,
00:38:22.401 "percent": 35
00:38:22.401 }
00:38:22.401 },
00:38:22.401 "base_bdevs_list": [
00:38:22.401 {
00:38:22.401 "name": "spare",
00:38:22.401 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563",
00:38:22.401 "is_configured": true,
00:38:22.401 "data_offset": 256,
00:38:22.401 "data_size": 7936
00:38:22.401 },
00:38:22.401 {
00:38:22.401 "name": "BaseBdev2",
00:38:22.401 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61",
00:38:22.401 "is_configured": true,
00:38:22.401 "data_offset": 256,
00:38:22.401 "data_size": 7936
00:38:22.401 }
00:38:22.401 ]
00:38:22.401 }'
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:38:22.401 14:07:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:38:23.337 "name": "raid_bdev1",
00:38:23.337 "uuid": "b563f961-5f76-4812-846b-8a18ede60555",
00:38:23.337 "strip_size_kb": 0,
00:38:23.337 "state": "online",
00:38:23.337 "raid_level": "raid1",
00:38:23.337 "superblock": true,
00:38:23.337 "num_base_bdevs": 2,
00:38:23.337 "num_base_bdevs_discovered": 2,
00:38:23.337 "num_base_bdevs_operational": 2,
00:38:23.337 "process": {
00:38:23.337 "type": "rebuild",
00:38:23.337 "target": "spare",
00:38:23.337 "progress": {
00:38:23.337 "blocks": 5632,
00:38:23.337 "percent": 70
00:38:23.337 }
00:38:23.337 },
00:38:23.337 "base_bdevs_list": [
00:38:23.337 {
00:38:23.337 "name": "spare", 00:38:23.337 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:23.337 "is_configured": true, 00:38:23.337 "data_offset": 256, 00:38:23.337 "data_size": 7936 00:38:23.337 }, 00:38:23.337 { 00:38:23.337 "name": "BaseBdev2", 00:38:23.337 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:23.337 "is_configured": true, 00:38:23.337 "data_offset": 256, 00:38:23.337 "data_size": 7936 00:38:23.337 } 00:38:23.337 ] 00:38:23.337 }' 00:38:23.337 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:23.595 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:23.595 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:23.595 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:23.595 14:07:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:24.163 [2024-10-09 14:07:30.645284] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:24.163 [2024-10-09 14:07:30.645400] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:24.163 [2024-10-09 14:07:30.645497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.421 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:24.680 14:07:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:24.680 "name": "raid_bdev1", 00:38:24.680 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:24.680 "strip_size_kb": 0, 00:38:24.680 "state": "online", 00:38:24.680 "raid_level": "raid1", 00:38:24.680 "superblock": true, 00:38:24.680 "num_base_bdevs": 2, 00:38:24.680 "num_base_bdevs_discovered": 2, 00:38:24.680 "num_base_bdevs_operational": 2, 00:38:24.680 "base_bdevs_list": [ 00:38:24.680 { 00:38:24.680 "name": "spare", 00:38:24.680 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:24.680 "is_configured": true, 00:38:24.680 "data_offset": 256, 00:38:24.680 "data_size": 7936 00:38:24.680 }, 00:38:24.680 { 00:38:24.680 "name": "BaseBdev2", 00:38:24.680 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:24.680 "is_configured": true, 00:38:24.680 "data_offset": 256, 00:38:24.680 "data_size": 7936 00:38:24.680 } 00:38:24.680 ] 00:38:24.680 }' 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:24.680 "name": "raid_bdev1", 00:38:24.680 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:24.680 "strip_size_kb": 0, 00:38:24.680 "state": "online", 00:38:24.680 "raid_level": "raid1", 00:38:24.680 "superblock": true, 00:38:24.680 "num_base_bdevs": 2, 00:38:24.680 "num_base_bdevs_discovered": 2, 00:38:24.680 "num_base_bdevs_operational": 2, 00:38:24.680 "base_bdevs_list": [ 00:38:24.680 { 00:38:24.680 "name": "spare", 00:38:24.680 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:24.680 "is_configured": true, 00:38:24.680 
"data_offset": 256, 00:38:24.680 "data_size": 7936 00:38:24.680 }, 00:38:24.680 { 00:38:24.680 "name": "BaseBdev2", 00:38:24.680 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:24.680 "is_configured": true, 00:38:24.680 "data_offset": 256, 00:38:24.680 "data_size": 7936 00:38:24.680 } 00:38:24.680 ] 00:38:24.680 }' 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:24.680 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:24.939 "name": "raid_bdev1", 00:38:24.939 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:24.939 "strip_size_kb": 0, 00:38:24.939 "state": "online", 00:38:24.939 "raid_level": "raid1", 00:38:24.939 "superblock": true, 00:38:24.939 "num_base_bdevs": 2, 00:38:24.939 "num_base_bdevs_discovered": 2, 00:38:24.939 "num_base_bdevs_operational": 2, 00:38:24.939 "base_bdevs_list": [ 00:38:24.939 { 00:38:24.939 "name": "spare", 00:38:24.939 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:24.939 "is_configured": true, 00:38:24.939 "data_offset": 256, 00:38:24.939 "data_size": 7936 00:38:24.939 }, 00:38:24.939 { 00:38:24.939 "name": "BaseBdev2", 00:38:24.939 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:24.939 "is_configured": true, 00:38:24.939 "data_offset": 256, 00:38:24.939 "data_size": 7936 00:38:24.939 } 00:38:24.939 ] 00:38:24.939 }' 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:24.939 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:25.198 
[2024-10-09 14:07:31.690029] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:25.198 [2024-10-09 14:07:31.690178] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:25.198 [2024-10-09 14:07:31.690283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:25.198 [2024-10-09 14:07:31.690351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:25.198 [2024-10-09 14:07:31.690367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:25.198 14:07:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:25.765 /dev/nbd0 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:25.765 1+0 records in 00:38:25.765 1+0 records out 00:38:25.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218521 s, 18.7 MB/s 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:25.765 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:26.023 /dev/nbd1 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:26.023 1+0 records in 00:38:26.023 1+0 records out 00:38:26.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440377 s, 9.3 MB/s 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:26.023 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:26.281 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:38:26.540 14:07:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:26.540 14:07:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:26.540 [2024-10-09 14:07:33.014694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:26.540 [2024-10-09 14:07:33.014869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:26.540 [2024-10-09 14:07:33.014899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:26.540 [2024-10-09 14:07:33.014917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:26.540 [2024-10-09 14:07:33.017459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:26.540 
[2024-10-09 14:07:33.017504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:26.540 [2024-10-09 14:07:33.017598] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:26.540 [2024-10-09 14:07:33.017661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:26.540 [2024-10-09 14:07:33.017767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:26.540 spare 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.540 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:26.798 [2024-10-09 14:07:33.117856] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:38:26.798 [2024-10-09 14:07:33.117881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:26.798 [2024-10-09 14:07:33.118171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:38:26.798 [2024-10-09 14:07:33.118311] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:38:26.798 [2024-10-09 14:07:33.118327] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:38:26.798 [2024-10-09 14:07:33.118456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:26.798 14:07:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:26.798 "name": "raid_bdev1", 00:38:26.798 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:26.798 "strip_size_kb": 0, 00:38:26.798 "state": "online", 00:38:26.798 "raid_level": "raid1", 00:38:26.798 "superblock": true, 00:38:26.798 "num_base_bdevs": 2, 00:38:26.798 "num_base_bdevs_discovered": 2, 00:38:26.798 "num_base_bdevs_operational": 2, 
00:38:26.798 "base_bdevs_list": [ 00:38:26.798 { 00:38:26.798 "name": "spare", 00:38:26.798 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:26.798 "is_configured": true, 00:38:26.798 "data_offset": 256, 00:38:26.798 "data_size": 7936 00:38:26.798 }, 00:38:26.798 { 00:38:26.798 "name": "BaseBdev2", 00:38:26.798 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:26.798 "is_configured": true, 00:38:26.798 "data_offset": 256, 00:38:26.798 "data_size": 7936 00:38:26.798 } 00:38:26.798 ] 00:38:26.798 }' 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:26.798 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:27.057 "name": "raid_bdev1", 00:38:27.057 
"uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:27.057 "strip_size_kb": 0, 00:38:27.057 "state": "online", 00:38:27.057 "raid_level": "raid1", 00:38:27.057 "superblock": true, 00:38:27.057 "num_base_bdevs": 2, 00:38:27.057 "num_base_bdevs_discovered": 2, 00:38:27.057 "num_base_bdevs_operational": 2, 00:38:27.057 "base_bdevs_list": [ 00:38:27.057 { 00:38:27.057 "name": "spare", 00:38:27.057 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:27.057 "is_configured": true, 00:38:27.057 "data_offset": 256, 00:38:27.057 "data_size": 7936 00:38:27.057 }, 00:38:27.057 { 00:38:27.057 "name": "BaseBdev2", 00:38:27.057 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:27.057 "is_configured": true, 00:38:27.057 "data_offset": 256, 00:38:27.057 "data_size": 7936 00:38:27.057 } 00:38:27.057 ] 00:38:27.057 }' 00:38:27.057 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.315 [2024-10-09 14:07:33.738890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.315 
14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:27.315 "name": "raid_bdev1", 00:38:27.315 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:27.315 "strip_size_kb": 0, 00:38:27.315 "state": "online", 00:38:27.315 "raid_level": "raid1", 00:38:27.315 "superblock": true, 00:38:27.315 "num_base_bdevs": 2, 00:38:27.315 "num_base_bdevs_discovered": 1, 00:38:27.315 "num_base_bdevs_operational": 1, 00:38:27.315 "base_bdevs_list": [ 00:38:27.315 { 00:38:27.315 "name": null, 00:38:27.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:27.315 "is_configured": false, 00:38:27.315 "data_offset": 0, 00:38:27.315 "data_size": 7936 00:38:27.315 }, 00:38:27.315 { 00:38:27.315 "name": "BaseBdev2", 00:38:27.315 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:27.315 "is_configured": true, 00:38:27.315 "data_offset": 256, 00:38:27.315 "data_size": 7936 00:38:27.315 } 00:38:27.315 ] 00:38:27.315 }' 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:27.315 14:07:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.882 14:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:27.882 14:07:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:27.882 14:07:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:27.882 [2024-10-09 14:07:34.187048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:27.882 [2024-10-09 14:07:34.187237] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:38:27.882 [2024-10-09 14:07:34.187253] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:27.882 [2024-10-09 14:07:34.187301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:27.882 [2024-10-09 14:07:34.191427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:38:27.882 14:07:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:27.882 14:07:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:27.882 [2024-10-09 14:07:34.194109] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:28.816 
"name": "raid_bdev1", 00:38:28.816 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:28.816 "strip_size_kb": 0, 00:38:28.816 "state": "online", 00:38:28.816 "raid_level": "raid1", 00:38:28.816 "superblock": true, 00:38:28.816 "num_base_bdevs": 2, 00:38:28.816 "num_base_bdevs_discovered": 2, 00:38:28.816 "num_base_bdevs_operational": 2, 00:38:28.816 "process": { 00:38:28.816 "type": "rebuild", 00:38:28.816 "target": "spare", 00:38:28.816 "progress": { 00:38:28.816 "blocks": 2560, 00:38:28.816 "percent": 32 00:38:28.816 } 00:38:28.816 }, 00:38:28.816 "base_bdevs_list": [ 00:38:28.816 { 00:38:28.816 "name": "spare", 00:38:28.816 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:28.816 "is_configured": true, 00:38:28.816 "data_offset": 256, 00:38:28.816 "data_size": 7936 00:38:28.816 }, 00:38:28.816 { 00:38:28.816 "name": "BaseBdev2", 00:38:28.816 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:28.816 "is_configured": true, 00:38:28.816 "data_offset": 256, 00:38:28.816 "data_size": 7936 00:38:28.816 } 00:38:28.816 ] 00:38:28.816 }' 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:28.816 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:28.816 [2024-10-09 14:07:35.336705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:29.075 [2024-10-09 
14:07:35.400663] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:29.075 [2024-10-09 14:07:35.400840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:29.075 [2024-10-09 14:07:35.400866] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:29.075 [2024-10-09 14:07:35.400877] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:29.075 "name": "raid_bdev1", 00:38:29.075 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:29.075 "strip_size_kb": 0, 00:38:29.075 "state": "online", 00:38:29.075 "raid_level": "raid1", 00:38:29.075 "superblock": true, 00:38:29.075 "num_base_bdevs": 2, 00:38:29.075 "num_base_bdevs_discovered": 1, 00:38:29.075 "num_base_bdevs_operational": 1, 00:38:29.075 "base_bdevs_list": [ 00:38:29.075 { 00:38:29.075 "name": null, 00:38:29.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:29.075 "is_configured": false, 00:38:29.075 "data_offset": 0, 00:38:29.075 "data_size": 7936 00:38:29.075 }, 00:38:29.075 { 00:38:29.075 "name": "BaseBdev2", 00:38:29.075 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:29.075 "is_configured": true, 00:38:29.075 "data_offset": 256, 00:38:29.075 "data_size": 7936 00:38:29.075 } 00:38:29.075 ] 00:38:29.075 }' 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:29.075 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:29.334 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:29.334 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:29.334 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:29.334 [2024-10-09 14:07:35.853288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:29.334 [2024-10-09 14:07:35.853507] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:29.334 [2024-10-09 14:07:35.853592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:29.334 [2024-10-09 14:07:35.853609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:29.334 [2024-10-09 14:07:35.854096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:29.334 [2024-10-09 14:07:35.854118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:29.334 [2024-10-09 14:07:35.854208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:29.334 [2024-10-09 14:07:35.854222] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:29.334 [2024-10-09 14:07:35.854243] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:38:29.334 [2024-10-09 14:07:35.854265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:29.334 [2024-10-09 14:07:35.858489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:38:29.334 spare 00:38:29.334 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:29.334 14:07:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:29.334 [2024-10-09 14:07:35.860920] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.707 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:30.707 "name": "raid_bdev1", 00:38:30.707 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:30.707 "strip_size_kb": 0, 00:38:30.707 
"state": "online", 00:38:30.707 "raid_level": "raid1", 00:38:30.707 "superblock": true, 00:38:30.707 "num_base_bdevs": 2, 00:38:30.707 "num_base_bdevs_discovered": 2, 00:38:30.707 "num_base_bdevs_operational": 2, 00:38:30.707 "process": { 00:38:30.707 "type": "rebuild", 00:38:30.707 "target": "spare", 00:38:30.707 "progress": { 00:38:30.707 "blocks": 2560, 00:38:30.707 "percent": 32 00:38:30.707 } 00:38:30.707 }, 00:38:30.707 "base_bdevs_list": [ 00:38:30.707 { 00:38:30.707 "name": "spare", 00:38:30.707 "uuid": "2aad96cf-9afe-56df-94a9-4674846d5563", 00:38:30.708 "is_configured": true, 00:38:30.708 "data_offset": 256, 00:38:30.708 "data_size": 7936 00:38:30.708 }, 00:38:30.708 { 00:38:30.708 "name": "BaseBdev2", 00:38:30.708 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:30.708 "is_configured": true, 00:38:30.708 "data_offset": 256, 00:38:30.708 "data_size": 7936 00:38:30.708 } 00:38:30.708 ] 00:38:30.708 }' 00:38:30.708 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:30.708 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:30.708 14:07:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:30.708 [2024-10-09 14:07:37.015056] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:30.708 [2024-10-09 14:07:37.067436] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:38:30.708 [2024-10-09 14:07:37.067645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:30.708 [2024-10-09 14:07:37.067740] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:30.708 [2024-10-09 14:07:37.067783] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:30.708 14:07:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:30.708 "name": "raid_bdev1", 00:38:30.708 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:30.708 "strip_size_kb": 0, 00:38:30.708 "state": "online", 00:38:30.708 "raid_level": "raid1", 00:38:30.708 "superblock": true, 00:38:30.708 "num_base_bdevs": 2, 00:38:30.708 "num_base_bdevs_discovered": 1, 00:38:30.708 "num_base_bdevs_operational": 1, 00:38:30.708 "base_bdevs_list": [ 00:38:30.708 { 00:38:30.708 "name": null, 00:38:30.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:30.708 "is_configured": false, 00:38:30.708 "data_offset": 0, 00:38:30.708 "data_size": 7936 00:38:30.708 }, 00:38:30.708 { 00:38:30.708 "name": "BaseBdev2", 00:38:30.708 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:30.708 "is_configured": true, 00:38:30.708 "data_offset": 256, 00:38:30.708 "data_size": 7936 00:38:30.708 } 00:38:30.708 ] 00:38:30.708 }' 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:30.708 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:31.273 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:31.274 "name": "raid_bdev1", 00:38:31.274 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:31.274 "strip_size_kb": 0, 00:38:31.274 "state": "online", 00:38:31.274 "raid_level": "raid1", 00:38:31.274 "superblock": true, 00:38:31.274 "num_base_bdevs": 2, 00:38:31.274 "num_base_bdevs_discovered": 1, 00:38:31.274 "num_base_bdevs_operational": 1, 00:38:31.274 "base_bdevs_list": [ 00:38:31.274 { 00:38:31.274 "name": null, 00:38:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:31.274 "is_configured": false, 00:38:31.274 "data_offset": 0, 00:38:31.274 "data_size": 7936 00:38:31.274 }, 00:38:31.274 { 00:38:31.274 "name": "BaseBdev2", 00:38:31.274 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:31.274 "is_configured": true, 00:38:31.274 "data_offset": 256, 00:38:31.274 "data_size": 7936 00:38:31.274 } 00:38:31.274 ] 00:38:31.274 }' 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:31.274 [2024-10-09 14:07:37.684149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:31.274 [2024-10-09 14:07:37.684212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:31.274 [2024-10-09 14:07:37.684236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:31.274 [2024-10-09 14:07:37.684250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:31.274 [2024-10-09 14:07:37.684678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:31.274 [2024-10-09 14:07:37.684704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:31.274 [2024-10-09 14:07:37.684783] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:31.274 [2024-10-09 14:07:37.684807] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:31.274 [2024-10-09 14:07:37.684817] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:31.274 [2024-10-09 14:07:37.684833] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:31.274 BaseBdev1 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.274 14:07:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:32.210 "name": "raid_bdev1", 00:38:32.210 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:32.210 "strip_size_kb": 0, 00:38:32.210 "state": "online", 00:38:32.210 "raid_level": "raid1", 00:38:32.210 "superblock": true, 00:38:32.210 "num_base_bdevs": 2, 00:38:32.210 "num_base_bdevs_discovered": 1, 00:38:32.210 "num_base_bdevs_operational": 1, 00:38:32.210 "base_bdevs_list": [ 00:38:32.210 { 00:38:32.210 "name": null, 00:38:32.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:32.210 "is_configured": false, 00:38:32.210 "data_offset": 0, 00:38:32.210 "data_size": 7936 00:38:32.210 }, 00:38:32.210 { 00:38:32.210 "name": "BaseBdev2", 00:38:32.210 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:32.210 "is_configured": true, 00:38:32.210 "data_offset": 256, 00:38:32.210 "data_size": 7936 00:38:32.210 } 00:38:32.210 ] 00:38:32.210 }' 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:32.210 14:07:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:32.846 "name": "raid_bdev1", 00:38:32.846 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:32.846 "strip_size_kb": 0, 00:38:32.846 "state": "online", 00:38:32.846 "raid_level": "raid1", 00:38:32.846 "superblock": true, 00:38:32.846 "num_base_bdevs": 2, 00:38:32.846 "num_base_bdevs_discovered": 1, 00:38:32.846 "num_base_bdevs_operational": 1, 00:38:32.846 "base_bdevs_list": [ 00:38:32.846 { 00:38:32.846 "name": null, 00:38:32.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:32.846 "is_configured": false, 00:38:32.846 "data_offset": 0, 00:38:32.846 "data_size": 7936 00:38:32.846 }, 00:38:32.846 { 00:38:32.846 "name": "BaseBdev2", 00:38:32.846 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:32.846 "is_configured": true, 00:38:32.846 "data_offset": 256, 00:38:32.846 "data_size": 7936 00:38:32.846 } 00:38:32.846 ] 00:38:32.846 }' 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:32.846 [2024-10-09 14:07:39.292539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:32.846 [2024-10-09 14:07:39.292718] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:32.846 [2024-10-09 14:07:39.292738] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:32.846 request: 00:38:32.846 { 00:38:32.846 "base_bdev": "BaseBdev1", 00:38:32.846 "raid_bdev": "raid_bdev1", 00:38:32.846 "method": "bdev_raid_add_base_bdev", 00:38:32.846 "req_id": 1 00:38:32.846 } 00:38:32.846 Got JSON-RPC error response 00:38:32.846 response: 00:38:32.846 { 00:38:32.846 "code": -22, 00:38:32.846 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:32.846 } 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:32.846 14:07:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:38:33.781 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:34.040 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.040 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:34.040 "name": "raid_bdev1", 00:38:34.040 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:34.040 "strip_size_kb": 0, 00:38:34.040 "state": "online", 00:38:34.040 "raid_level": "raid1", 00:38:34.040 "superblock": true, 00:38:34.040 "num_base_bdevs": 2, 00:38:34.040 "num_base_bdevs_discovered": 1, 00:38:34.040 "num_base_bdevs_operational": 1, 00:38:34.040 "base_bdevs_list": [ 00:38:34.040 { 00:38:34.040 "name": null, 00:38:34.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:34.040 "is_configured": false, 00:38:34.040 "data_offset": 0, 00:38:34.040 "data_size": 7936 00:38:34.040 }, 00:38:34.040 { 00:38:34.040 "name": "BaseBdev2", 00:38:34.040 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:34.040 "is_configured": true, 00:38:34.040 "data_offset": 256, 00:38:34.040 "data_size": 7936 00:38:34.040 } 00:38:34.040 ] 00:38:34.040 }' 00:38:34.040 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:34.040 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:34.299 14:07:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:34.299 "name": "raid_bdev1", 00:38:34.299 "uuid": "b563f961-5f76-4812-846b-8a18ede60555", 00:38:34.299 "strip_size_kb": 0, 00:38:34.299 "state": "online", 00:38:34.299 "raid_level": "raid1", 00:38:34.299 "superblock": true, 00:38:34.299 "num_base_bdevs": 2, 00:38:34.299 "num_base_bdevs_discovered": 1, 00:38:34.299 "num_base_bdevs_operational": 1, 00:38:34.299 "base_bdevs_list": [ 00:38:34.299 { 00:38:34.299 "name": null, 00:38:34.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:34.299 "is_configured": false, 00:38:34.299 "data_offset": 0, 00:38:34.299 "data_size": 7936 00:38:34.299 }, 00:38:34.299 { 00:38:34.299 "name": "BaseBdev2", 00:38:34.299 "uuid": "5e0d69cf-3d84-5f90-a2ac-3df3a4d1fa61", 00:38:34.299 "is_configured": true, 00:38:34.299 "data_offset": 256, 00:38:34.299 "data_size": 7936 00:38:34.299 } 00:38:34.299 ] 00:38:34.299 }' 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:34.299 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:34.557 14:07:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97301 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97301 ']' 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97301 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97301 00:38:34.557 killing process with pid 97301 00:38:34.557 Received shutdown signal, test time was about 60.000000 seconds 00:38:34.557 00:38:34.557 Latency(us) 00:38:34.557 [2024-10-09T14:07:41.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.557 [2024-10-09T14:07:41.108Z] =================================================================================================================== 00:38:34.557 [2024-10-09T14:07:41.108Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:34.557 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:34.558 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97301' 00:38:34.558 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97301 00:38:34.558 [2024-10-09 14:07:40.930952] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:34.558 14:07:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97301 00:38:34.558 [2024-10-09 14:07:40.931075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:34.558 [2024-10-09 
14:07:40.931127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:34.558 [2024-10-09 14:07:40.931138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:38:34.558 [2024-10-09 14:07:40.962509] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:34.816 14:07:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:38:34.816 00:38:34.816 real 0m18.761s 00:38:34.816 user 0m24.979s 00:38:34.816 sys 0m2.866s 00:38:34.816 14:07:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:34.816 14:07:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:38:34.816 ************************************ 00:38:34.816 END TEST raid_rebuild_test_sb_4k 00:38:34.816 ************************************ 00:38:34.816 14:07:41 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:38:34.816 14:07:41 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:38:34.816 14:07:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:38:34.816 14:07:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:34.816 14:07:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:34.816 ************************************ 00:38:34.816 START TEST raid_state_function_test_sb_md_separate 00:38:34.816 ************************************ 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:38:34.816 
14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:38:34.816 14:07:41 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97981 00:38:34.816 Process raid pid: 97981 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97981' 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97981 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97981 ']' 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:34.816 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.817 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:34.817 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:34.817 14:07:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:34.817 [2024-10-09 14:07:41.357351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:34.817 [2024-10-09 14:07:41.357512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.075 [2024-10-09 14:07:41.516279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.075 [2024-10-09 14:07:41.560835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.075 [2024-10-09 14:07:41.604295] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:35.075 [2024-10-09 14:07:41.604345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:36.009 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:36.009 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:38:36.009 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.010 [2024-10-09 14:07:42.339112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:36.010 [2024-10-09 14:07:42.339172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:38:36.010 [2024-10-09 14:07:42.339193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:36.010 [2024-10-09 14:07:42.339208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.010 
14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:36.010 "name": "Existed_Raid", 00:38:36.010 "uuid": "9850b57f-75e1-4d60-84f5-58a2fc093416", 00:38:36.010 "strip_size_kb": 0, 00:38:36.010 "state": "configuring", 00:38:36.010 "raid_level": "raid1", 00:38:36.010 "superblock": true, 00:38:36.010 "num_base_bdevs": 2, 00:38:36.010 "num_base_bdevs_discovered": 0, 00:38:36.010 "num_base_bdevs_operational": 2, 00:38:36.010 "base_bdevs_list": [ 00:38:36.010 { 00:38:36.010 "name": "BaseBdev1", 00:38:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.010 "is_configured": false, 00:38:36.010 "data_offset": 0, 00:38:36.010 "data_size": 0 00:38:36.010 }, 00:38:36.010 { 00:38:36.010 "name": "BaseBdev2", 00:38:36.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.010 "is_configured": false, 00:38:36.010 "data_offset": 0, 00:38:36.010 "data_size": 0 00:38:36.010 } 00:38:36.010 ] 00:38:36.010 }' 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:36.010 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.269 
[2024-10-09 14:07:42.803098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:36.269 [2024-10-09 14:07:42.803147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.269 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.269 [2024-10-09 14:07:42.815151] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:36.269 [2024-10-09 14:07:42.815193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:36.269 [2024-10-09 14:07:42.815203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:36.269 [2024-10-09 14:07:42.815215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.527 [2024-10-09 14:07:42.833148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:36.527 
BaseBdev1 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.527 [ 00:38:36.527 { 00:38:36.527 "name": "BaseBdev1", 00:38:36.527 "aliases": [ 00:38:36.527 "2b4f04a9-6664-4100-af33-4cf7bfa209b3" 00:38:36.527 ], 00:38:36.527 "product_name": "Malloc disk", 
00:38:36.527 "block_size": 4096, 00:38:36.527 "num_blocks": 8192, 00:38:36.527 "uuid": "2b4f04a9-6664-4100-af33-4cf7bfa209b3", 00:38:36.527 "md_size": 32, 00:38:36.527 "md_interleave": false, 00:38:36.527 "dif_type": 0, 00:38:36.527 "assigned_rate_limits": { 00:38:36.527 "rw_ios_per_sec": 0, 00:38:36.527 "rw_mbytes_per_sec": 0, 00:38:36.527 "r_mbytes_per_sec": 0, 00:38:36.527 "w_mbytes_per_sec": 0 00:38:36.527 }, 00:38:36.527 "claimed": true, 00:38:36.527 "claim_type": "exclusive_write", 00:38:36.527 "zoned": false, 00:38:36.527 "supported_io_types": { 00:38:36.527 "read": true, 00:38:36.527 "write": true, 00:38:36.527 "unmap": true, 00:38:36.527 "flush": true, 00:38:36.527 "reset": true, 00:38:36.527 "nvme_admin": false, 00:38:36.527 "nvme_io": false, 00:38:36.527 "nvme_io_md": false, 00:38:36.527 "write_zeroes": true, 00:38:36.527 "zcopy": true, 00:38:36.527 "get_zone_info": false, 00:38:36.527 "zone_management": false, 00:38:36.527 "zone_append": false, 00:38:36.527 "compare": false, 00:38:36.527 "compare_and_write": false, 00:38:36.527 "abort": true, 00:38:36.527 "seek_hole": false, 00:38:36.527 "seek_data": false, 00:38:36.527 "copy": true, 00:38:36.527 "nvme_iov_md": false 00:38:36.527 }, 00:38:36.527 "memory_domains": [ 00:38:36.527 { 00:38:36.527 "dma_device_id": "system", 00:38:36.527 "dma_device_type": 1 00:38:36.527 }, 00:38:36.527 { 00:38:36.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:36.527 "dma_device_type": 2 00:38:36.527 } 00:38:36.527 ], 00:38:36.527 "driver_specific": {} 00:38:36.527 } 00:38:36.527 ] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:36.527 14:07:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:36.527 "name": "Existed_Raid", 00:38:36.527 "uuid": "19dd2ac9-9ad8-4901-80f9-11072afaaff5", 
00:38:36.527 "strip_size_kb": 0, 00:38:36.527 "state": "configuring", 00:38:36.527 "raid_level": "raid1", 00:38:36.527 "superblock": true, 00:38:36.527 "num_base_bdevs": 2, 00:38:36.527 "num_base_bdevs_discovered": 1, 00:38:36.527 "num_base_bdevs_operational": 2, 00:38:36.527 "base_bdevs_list": [ 00:38:36.527 { 00:38:36.527 "name": "BaseBdev1", 00:38:36.527 "uuid": "2b4f04a9-6664-4100-af33-4cf7bfa209b3", 00:38:36.527 "is_configured": true, 00:38:36.527 "data_offset": 256, 00:38:36.527 "data_size": 7936 00:38:36.527 }, 00:38:36.527 { 00:38:36.527 "name": "BaseBdev2", 00:38:36.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:36.527 "is_configured": false, 00:38:36.527 "data_offset": 0, 00:38:36.527 "data_size": 0 00:38:36.527 } 00:38:36.527 ] 00:38:36.527 }' 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:36.527 14:07:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.786 [2024-10-09 14:07:43.313329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:36.786 [2024-10-09 14:07:43.313521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:38:36.786 14:07:43 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:36.786 [2024-10-09 14:07:43.325393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:36.786 [2024-10-09 14:07:43.327682] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:36.786 [2024-10-09 14:07:43.327723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:36.786 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:36.787 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:37.045 "name": "Existed_Raid", 00:38:37.045 "uuid": "92af60d4-d3f9-4abc-8323-fb444be0ad10", 00:38:37.045 "strip_size_kb": 0, 00:38:37.045 "state": "configuring", 00:38:37.045 "raid_level": "raid1", 00:38:37.045 "superblock": true, 00:38:37.045 "num_base_bdevs": 2, 00:38:37.045 "num_base_bdevs_discovered": 1, 00:38:37.045 "num_base_bdevs_operational": 2, 00:38:37.045 "base_bdevs_list": [ 00:38:37.045 { 00:38:37.045 "name": "BaseBdev1", 00:38:37.045 "uuid": "2b4f04a9-6664-4100-af33-4cf7bfa209b3", 00:38:37.045 "is_configured": true, 00:38:37.045 "data_offset": 256, 00:38:37.045 "data_size": 7936 00:38:37.045 }, 00:38:37.045 { 00:38:37.045 "name": "BaseBdev2", 00:38:37.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:37.045 "is_configured": false, 00:38:37.045 "data_offset": 0, 00:38:37.045 "data_size": 0 00:38:37.045 } 00:38:37.045 ] 00:38:37.045 }' 00:38:37.045 14:07:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:37.045 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.304 [2024-10-09 14:07:43.768499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:37.304 [2024-10-09 14:07:43.768957] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:38:37.304 BaseBdev2 00:38:37.304 [2024-10-09 14:07:43.769104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:37.304 [2024-10-09 14:07:43.769257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:37.304 [2024-10-09 14:07:43.769403] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:38:37.304 [2024-10-09 14:07:43.769426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:38:37.304 [2024-10-09 14:07:43.769513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:37.304 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.305 [ 00:38:37.305 { 00:38:37.305 "name": "BaseBdev2", 00:38:37.305 "aliases": [ 00:38:37.305 "e071dfd7-6196-4704-bca1-5d65ed400ed9" 00:38:37.305 ], 00:38:37.305 "product_name": "Malloc disk", 00:38:37.305 "block_size": 4096, 00:38:37.305 "num_blocks": 8192, 00:38:37.305 "uuid": "e071dfd7-6196-4704-bca1-5d65ed400ed9", 00:38:37.305 "md_size": 32, 00:38:37.305 "md_interleave": false, 00:38:37.305 "dif_type": 0, 00:38:37.305 "assigned_rate_limits": { 00:38:37.305 "rw_ios_per_sec": 0, 00:38:37.305 "rw_mbytes_per_sec": 0, 00:38:37.305 "r_mbytes_per_sec": 0, 00:38:37.305 "w_mbytes_per_sec": 0 00:38:37.305 }, 00:38:37.305 "claimed": true, 00:38:37.305 "claim_type": 
"exclusive_write", 00:38:37.305 "zoned": false, 00:38:37.305 "supported_io_types": { 00:38:37.305 "read": true, 00:38:37.305 "write": true, 00:38:37.305 "unmap": true, 00:38:37.305 "flush": true, 00:38:37.305 "reset": true, 00:38:37.305 "nvme_admin": false, 00:38:37.305 "nvme_io": false, 00:38:37.305 "nvme_io_md": false, 00:38:37.305 "write_zeroes": true, 00:38:37.305 "zcopy": true, 00:38:37.305 "get_zone_info": false, 00:38:37.305 "zone_management": false, 00:38:37.305 "zone_append": false, 00:38:37.305 "compare": false, 00:38:37.305 "compare_and_write": false, 00:38:37.305 "abort": true, 00:38:37.305 "seek_hole": false, 00:38:37.305 "seek_data": false, 00:38:37.305 "copy": true, 00:38:37.305 "nvme_iov_md": false 00:38:37.305 }, 00:38:37.305 "memory_domains": [ 00:38:37.305 { 00:38:37.305 "dma_device_id": "system", 00:38:37.305 "dma_device_type": 1 00:38:37.305 }, 00:38:37.305 { 00:38:37.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:37.305 "dma_device_type": 2 00:38:37.305 } 00:38:37.305 ], 00:38:37.305 "driver_specific": {} 00:38:37.305 } 00:38:37.305 ] 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:37.305 
14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:37.305 "name": "Existed_Raid", 00:38:37.305 "uuid": "92af60d4-d3f9-4abc-8323-fb444be0ad10", 00:38:37.305 "strip_size_kb": 0, 00:38:37.305 "state": "online", 00:38:37.305 "raid_level": "raid1", 00:38:37.305 "superblock": true, 00:38:37.305 "num_base_bdevs": 2, 00:38:37.305 "num_base_bdevs_discovered": 2, 00:38:37.305 "num_base_bdevs_operational": 2, 00:38:37.305 
"base_bdevs_list": [ 00:38:37.305 { 00:38:37.305 "name": "BaseBdev1", 00:38:37.305 "uuid": "2b4f04a9-6664-4100-af33-4cf7bfa209b3", 00:38:37.305 "is_configured": true, 00:38:37.305 "data_offset": 256, 00:38:37.305 "data_size": 7936 00:38:37.305 }, 00:38:37.305 { 00:38:37.305 "name": "BaseBdev2", 00:38:37.305 "uuid": "e071dfd7-6196-4704-bca1-5d65ed400ed9", 00:38:37.305 "is_configured": true, 00:38:37.305 "data_offset": 256, 00:38:37.305 "data_size": 7936 00:38:37.305 } 00:38:37.305 ] 00:38:37.305 }' 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:37.305 14:07:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:38:37.874 [2024-10-09 14:07:44.233001] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:37.874 "name": "Existed_Raid", 00:38:37.874 "aliases": [ 00:38:37.874 "92af60d4-d3f9-4abc-8323-fb444be0ad10" 00:38:37.874 ], 00:38:37.874 "product_name": "Raid Volume", 00:38:37.874 "block_size": 4096, 00:38:37.874 "num_blocks": 7936, 00:38:37.874 "uuid": "92af60d4-d3f9-4abc-8323-fb444be0ad10", 00:38:37.874 "md_size": 32, 00:38:37.874 "md_interleave": false, 00:38:37.874 "dif_type": 0, 00:38:37.874 "assigned_rate_limits": { 00:38:37.874 "rw_ios_per_sec": 0, 00:38:37.874 "rw_mbytes_per_sec": 0, 00:38:37.874 "r_mbytes_per_sec": 0, 00:38:37.874 "w_mbytes_per_sec": 0 00:38:37.874 }, 00:38:37.874 "claimed": false, 00:38:37.874 "zoned": false, 00:38:37.874 "supported_io_types": { 00:38:37.874 "read": true, 00:38:37.874 "write": true, 00:38:37.874 "unmap": false, 00:38:37.874 "flush": false, 00:38:37.874 "reset": true, 00:38:37.874 "nvme_admin": false, 00:38:37.874 "nvme_io": false, 00:38:37.874 "nvme_io_md": false, 00:38:37.874 "write_zeroes": true, 00:38:37.874 "zcopy": false, 00:38:37.874 "get_zone_info": false, 00:38:37.874 "zone_management": false, 00:38:37.874 "zone_append": false, 00:38:37.874 "compare": false, 00:38:37.874 "compare_and_write": false, 00:38:37.874 "abort": false, 00:38:37.874 "seek_hole": false, 00:38:37.874 "seek_data": false, 00:38:37.874 "copy": false, 00:38:37.874 "nvme_iov_md": false 00:38:37.874 }, 00:38:37.874 "memory_domains": [ 00:38:37.874 { 00:38:37.874 "dma_device_id": "system", 00:38:37.874 "dma_device_type": 1 00:38:37.874 }, 00:38:37.874 { 00:38:37.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:37.874 "dma_device_type": 2 00:38:37.874 }, 00:38:37.874 { 
00:38:37.874 "dma_device_id": "system", 00:38:37.874 "dma_device_type": 1 00:38:37.874 }, 00:38:37.874 { 00:38:37.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:37.874 "dma_device_type": 2 00:38:37.874 } 00:38:37.874 ], 00:38:37.874 "driver_specific": { 00:38:37.874 "raid": { 00:38:37.874 "uuid": "92af60d4-d3f9-4abc-8323-fb444be0ad10", 00:38:37.874 "strip_size_kb": 0, 00:38:37.874 "state": "online", 00:38:37.874 "raid_level": "raid1", 00:38:37.874 "superblock": true, 00:38:37.874 "num_base_bdevs": 2, 00:38:37.874 "num_base_bdevs_discovered": 2, 00:38:37.874 "num_base_bdevs_operational": 2, 00:38:37.874 "base_bdevs_list": [ 00:38:37.874 { 00:38:37.874 "name": "BaseBdev1", 00:38:37.874 "uuid": "2b4f04a9-6664-4100-af33-4cf7bfa209b3", 00:38:37.874 "is_configured": true, 00:38:37.874 "data_offset": 256, 00:38:37.874 "data_size": 7936 00:38:37.874 }, 00:38:37.874 { 00:38:37.874 "name": "BaseBdev2", 00:38:37.874 "uuid": "e071dfd7-6196-4704-bca1-5d65ed400ed9", 00:38:37.874 "is_configured": true, 00:38:37.874 "data_offset": 256, 00:38:37.874 "data_size": 7936 00:38:37.874 } 00:38:37.874 ] 00:38:37.874 } 00:38:37.874 } 00:38:37.874 }' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:38:37.874 BaseBdev2' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:37.874 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.133 [2024-10-09 14:07:44.436795] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:38.133 "name": "Existed_Raid", 00:38:38.133 "uuid": "92af60d4-d3f9-4abc-8323-fb444be0ad10", 00:38:38.133 "strip_size_kb": 0, 00:38:38.133 "state": "online", 00:38:38.133 "raid_level": "raid1", 00:38:38.133 "superblock": true, 00:38:38.133 "num_base_bdevs": 2, 00:38:38.133 "num_base_bdevs_discovered": 1, 00:38:38.133 "num_base_bdevs_operational": 1, 00:38:38.133 "base_bdevs_list": [ 00:38:38.133 { 00:38:38.133 "name": null, 00:38:38.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:38.133 "is_configured": false, 00:38:38.133 "data_offset": 0, 00:38:38.133 "data_size": 7936 00:38:38.133 }, 00:38:38.133 { 00:38:38.133 "name": "BaseBdev2", 00:38:38.133 "uuid": 
"e071dfd7-6196-4704-bca1-5d65ed400ed9", 00:38:38.133 "is_configured": true, 00:38:38.133 "data_offset": 256, 00:38:38.133 "data_size": 7936 00:38:38.133 } 00:38:38.133 ] 00:38:38.133 }' 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:38.133 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.392 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.651 [2024-10-09 14:07:44.945995] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:38.651 [2024-10-09 14:07:44.946245] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:38.651 [2024-10-09 14:07:44.959685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:38.651 [2024-10-09 14:07:44.959897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:38.651 [2024-10-09 14:07:44.960078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.651 14:07:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:38:38.651 14:07:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97981 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97981 ']' 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97981 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97981 00:38:38.651 killing process with pid 97981 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97981' 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97981 00:38:38.651 [2024-10-09 14:07:45.047058] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:38.651 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97981 00:38:38.651 [2024-10-09 14:07:45.048132] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:38.910 ************************************ 00:38:38.910 END TEST raid_state_function_test_sb_md_separate 00:38:38.910 ************************************ 00:38:38.910 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:38:38.910 00:38:38.911 real 0m4.021s 00:38:38.911 user 0m6.394s 
00:38:38.911 sys 0m0.807s 00:38:38.911 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:38.911 14:07:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:38.911 14:07:45 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:38:38.911 14:07:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:38.911 14:07:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:38.911 14:07:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:38.911 ************************************ 00:38:38.911 START TEST raid_superblock_test_md_separate 00:38:38.911 ************************************ 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98221 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98221 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98221 ']' 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:38.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:38.911 14:07:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.169 [2024-10-09 14:07:45.472822] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:39.169 [2024-10-09 14:07:45.473242] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98221 ] 00:38:39.169 [2024-10-09 14:07:45.653087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.169 [2024-10-09 14:07:45.696414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.428 [2024-10-09 14:07:45.739738] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:39.428 [2024-10-09 14:07:45.739987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:38:39.995 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:39.996 14:07:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 malloc1 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 [2024-10-09 14:07:46.432530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:39.996 [2024-10-09 14:07:46.432605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:39.996 [2024-10-09 14:07:46.432631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:39.996 [2024-10-09 14:07:46.432655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:39.996 [2024-10-09 14:07:46.434934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:39.996 [2024-10-09 14:07:46.434978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:38:39.996 pt1 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 malloc2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 [2024-10-09 14:07:46.471914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:39.996 [2024-10-09 14:07:46.471994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:39.996 [2024-10-09 14:07:46.472022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:39.996 [2024-10-09 14:07:46.472041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:39.996 [2024-10-09 14:07:46.474855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:39.996 [2024-10-09 14:07:46.474894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:39.996 pt2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 [2024-10-09 14:07:46.483889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:39.996 [2024-10-09 14:07:46.486093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:39.996 [2024-10-09 14:07:46.486234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:38:39.996 [2024-10-09 14:07:46.486253] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:39.996 [2024-10-09 14:07:46.486346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:39.996 [2024-10-09 14:07:46.486450] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:38:39.996 [2024-10-09 14:07:46.486463] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:38:39.996 [2024-10-09 14:07:46.486569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:39.996 14:07:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:39.996 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:39.996 "name": "raid_bdev1", 00:38:39.996 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:39.996 "strip_size_kb": 0, 00:38:39.996 "state": "online", 00:38:39.996 "raid_level": "raid1", 00:38:39.996 "superblock": true, 00:38:39.996 "num_base_bdevs": 2, 00:38:39.996 "num_base_bdevs_discovered": 2, 00:38:39.996 "num_base_bdevs_operational": 2, 00:38:39.996 "base_bdevs_list": [ 00:38:39.996 { 00:38:39.996 "name": "pt1", 00:38:39.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:39.996 "is_configured": true, 00:38:39.996 "data_offset": 256, 00:38:39.996 "data_size": 7936 00:38:39.996 }, 00:38:39.996 { 00:38:39.996 "name": "pt2", 00:38:39.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:39.996 "is_configured": true, 00:38:39.996 "data_offset": 256, 00:38:39.996 "data_size": 7936 00:38:39.996 } 00:38:39.997 ] 00:38:39.997 }' 00:38:39.997 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:39.997 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.565 [2024-10-09 14:07:46.880305] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:40.565 "name": "raid_bdev1", 00:38:40.565 "aliases": [ 00:38:40.565 "27c61503-8e91-4d3b-84c8-67f437752472" 00:38:40.565 ], 00:38:40.565 "product_name": "Raid Volume", 00:38:40.565 "block_size": 4096, 00:38:40.565 "num_blocks": 7936, 00:38:40.565 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:40.565 "md_size": 32, 00:38:40.565 "md_interleave": false, 00:38:40.565 "dif_type": 0, 00:38:40.565 "assigned_rate_limits": { 00:38:40.565 "rw_ios_per_sec": 0, 00:38:40.565 "rw_mbytes_per_sec": 0, 00:38:40.565 "r_mbytes_per_sec": 0, 00:38:40.565 "w_mbytes_per_sec": 0 00:38:40.565 }, 00:38:40.565 "claimed": false, 00:38:40.565 "zoned": false, 
00:38:40.565 "supported_io_types": { 00:38:40.565 "read": true, 00:38:40.565 "write": true, 00:38:40.565 "unmap": false, 00:38:40.565 "flush": false, 00:38:40.565 "reset": true, 00:38:40.565 "nvme_admin": false, 00:38:40.565 "nvme_io": false, 00:38:40.565 "nvme_io_md": false, 00:38:40.565 "write_zeroes": true, 00:38:40.565 "zcopy": false, 00:38:40.565 "get_zone_info": false, 00:38:40.565 "zone_management": false, 00:38:40.565 "zone_append": false, 00:38:40.565 "compare": false, 00:38:40.565 "compare_and_write": false, 00:38:40.565 "abort": false, 00:38:40.565 "seek_hole": false, 00:38:40.565 "seek_data": false, 00:38:40.565 "copy": false, 00:38:40.565 "nvme_iov_md": false 00:38:40.565 }, 00:38:40.565 "memory_domains": [ 00:38:40.565 { 00:38:40.565 "dma_device_id": "system", 00:38:40.565 "dma_device_type": 1 00:38:40.565 }, 00:38:40.565 { 00:38:40.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:40.565 "dma_device_type": 2 00:38:40.565 }, 00:38:40.565 { 00:38:40.565 "dma_device_id": "system", 00:38:40.565 "dma_device_type": 1 00:38:40.565 }, 00:38:40.565 { 00:38:40.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:40.565 "dma_device_type": 2 00:38:40.565 } 00:38:40.565 ], 00:38:40.565 "driver_specific": { 00:38:40.565 "raid": { 00:38:40.565 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:40.565 "strip_size_kb": 0, 00:38:40.565 "state": "online", 00:38:40.565 "raid_level": "raid1", 00:38:40.565 "superblock": true, 00:38:40.565 "num_base_bdevs": 2, 00:38:40.565 "num_base_bdevs_discovered": 2, 00:38:40.565 "num_base_bdevs_operational": 2, 00:38:40.565 "base_bdevs_list": [ 00:38:40.565 { 00:38:40.565 "name": "pt1", 00:38:40.565 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:40.565 "is_configured": true, 00:38:40.565 "data_offset": 256, 00:38:40.565 "data_size": 7936 00:38:40.565 }, 00:38:40.565 { 00:38:40.565 "name": "pt2", 00:38:40.565 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:40.565 "is_configured": true, 00:38:40.565 "data_offset": 256, 
00:38:40.565 "data_size": 7936 00:38:40.565 } 00:38:40.565 ] 00:38:40.565 } 00:38:40.565 } 00:38:40.565 }' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:40.565 pt2' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:38:40.565 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:40.566 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:40.566 14:07:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:40.566 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.566 14:07:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.566 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.566 [2024-10-09 14:07:47.100202] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27c61503-8e91-4d3b-84c8-67f437752472 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 27c61503-8e91-4d3b-84c8-67f437752472 ']' 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.825 [2024-10-09 14:07:47.135990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:40.825 [2024-10-09 14:07:47.136129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:40.825 [2024-10-09 14:07:47.136221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:40.825 [2024-10-09 14:07:47.136281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:40.825 [2024-10-09 14:07:47.136293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.825 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:38:40.826 14:07:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 [2024-10-09 14:07:47.268030] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:40.826 [2024-10-09 14:07:47.270303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:40.826 [2024-10-09 14:07:47.270373] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:40.826 [2024-10-09 14:07:47.270432] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:40.826 [2024-10-09 14:07:47.270454] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:40.826 [2024-10-09 14:07:47.270465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:38:40.826 request: 00:38:40.826 { 00:38:40.826 "name": 
"raid_bdev1", 00:38:40.826 "raid_level": "raid1", 00:38:40.826 "base_bdevs": [ 00:38:40.826 "malloc1", 00:38:40.826 "malloc2" 00:38:40.826 ], 00:38:40.826 "superblock": false, 00:38:40.826 "method": "bdev_raid_create", 00:38:40.826 "req_id": 1 00:38:40.826 } 00:38:40.826 Got JSON-RPC error response 00:38:40.826 response: 00:38:40.826 { 00:38:40.826 "code": -17, 00:38:40.826 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:40.826 } 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 [2024-10-09 14:07:47.324002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:40.826 [2024-10-09 14:07:47.324155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:40.826 [2024-10-09 14:07:47.324210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:38:40.826 [2024-10-09 14:07:47.324284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:40.826 [2024-10-09 14:07:47.326548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:40.826 [2024-10-09 14:07:47.326686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:40.826 [2024-10-09 14:07:47.326854] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:40.826 [2024-10-09 14:07:47.326926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:40.826 pt1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:40.826 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.086 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:41.086 "name": "raid_bdev1", 00:38:41.086 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:41.086 "strip_size_kb": 0, 00:38:41.086 "state": "configuring", 00:38:41.086 "raid_level": "raid1", 00:38:41.086 "superblock": true, 00:38:41.086 "num_base_bdevs": 2, 00:38:41.086 "num_base_bdevs_discovered": 1, 00:38:41.086 "num_base_bdevs_operational": 2, 00:38:41.086 "base_bdevs_list": [ 00:38:41.086 { 00:38:41.086 "name": "pt1", 00:38:41.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:41.086 "is_configured": true, 00:38:41.086 "data_offset": 256, 00:38:41.086 "data_size": 7936 00:38:41.086 }, 00:38:41.086 { 00:38:41.086 "name": null, 00:38:41.086 
"uuid": "00000000-0000-0000-0000-000000000002", 00:38:41.086 "is_configured": false, 00:38:41.086 "data_offset": 256, 00:38:41.086 "data_size": 7936 00:38:41.086 } 00:38:41.086 ] 00:38:41.086 }' 00:38:41.086 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:41.086 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.363 [2024-10-09 14:07:47.768151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:41.363 [2024-10-09 14:07:47.768341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:41.363 [2024-10-09 14:07:47.768376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:41.363 [2024-10-09 14:07:47.768389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:41.363 [2024-10-09 14:07:47.768615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:41.363 [2024-10-09 14:07:47.768631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:41.363 [2024-10-09 14:07:47.768688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:38:41.363 [2024-10-09 14:07:47.768708] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:41.363 [2024-10-09 14:07:47.768802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:38:41.363 [2024-10-09 14:07:47.768813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:41.363 [2024-10-09 14:07:47.768885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:41.363 [2024-10-09 14:07:47.768963] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:38:41.363 [2024-10-09 14:07:47.768978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:38:41.363 [2024-10-09 14:07:47.769044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:41.363 pt2 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:41.363 "name": "raid_bdev1", 00:38:41.363 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:41.363 "strip_size_kb": 0, 00:38:41.363 "state": "online", 00:38:41.363 "raid_level": "raid1", 00:38:41.363 "superblock": true, 00:38:41.363 "num_base_bdevs": 2, 00:38:41.363 "num_base_bdevs_discovered": 2, 00:38:41.363 "num_base_bdevs_operational": 2, 00:38:41.363 "base_bdevs_list": [ 00:38:41.363 { 00:38:41.363 "name": "pt1", 00:38:41.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:41.363 "is_configured": true, 00:38:41.363 "data_offset": 256, 00:38:41.363 "data_size": 7936 00:38:41.363 }, 00:38:41.363 { 00:38:41.363 "name": "pt2", 00:38:41.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:41.363 "is_configured": true, 00:38:41.363 "data_offset": 256, 
00:38:41.363 "data_size": 7936 00:38:41.363 } 00:38:41.363 ] 00:38:41.363 }' 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:41.363 14:07:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.955 [2024-10-09 14:07:48.208513] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.955 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:41.955 "name": "raid_bdev1", 00:38:41.955 "aliases": [ 00:38:41.955 "27c61503-8e91-4d3b-84c8-67f437752472" 00:38:41.955 ], 00:38:41.955 "product_name": 
"Raid Volume", 00:38:41.955 "block_size": 4096, 00:38:41.955 "num_blocks": 7936, 00:38:41.955 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:41.955 "md_size": 32, 00:38:41.955 "md_interleave": false, 00:38:41.955 "dif_type": 0, 00:38:41.955 "assigned_rate_limits": { 00:38:41.955 "rw_ios_per_sec": 0, 00:38:41.955 "rw_mbytes_per_sec": 0, 00:38:41.955 "r_mbytes_per_sec": 0, 00:38:41.955 "w_mbytes_per_sec": 0 00:38:41.955 }, 00:38:41.955 "claimed": false, 00:38:41.955 "zoned": false, 00:38:41.955 "supported_io_types": { 00:38:41.955 "read": true, 00:38:41.955 "write": true, 00:38:41.955 "unmap": false, 00:38:41.955 "flush": false, 00:38:41.955 "reset": true, 00:38:41.955 "nvme_admin": false, 00:38:41.955 "nvme_io": false, 00:38:41.955 "nvme_io_md": false, 00:38:41.955 "write_zeroes": true, 00:38:41.955 "zcopy": false, 00:38:41.955 "get_zone_info": false, 00:38:41.955 "zone_management": false, 00:38:41.955 "zone_append": false, 00:38:41.955 "compare": false, 00:38:41.955 "compare_and_write": false, 00:38:41.955 "abort": false, 00:38:41.955 "seek_hole": false, 00:38:41.955 "seek_data": false, 00:38:41.955 "copy": false, 00:38:41.955 "nvme_iov_md": false 00:38:41.955 }, 00:38:41.955 "memory_domains": [ 00:38:41.955 { 00:38:41.955 "dma_device_id": "system", 00:38:41.955 "dma_device_type": 1 00:38:41.955 }, 00:38:41.955 { 00:38:41.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:41.955 "dma_device_type": 2 00:38:41.955 }, 00:38:41.955 { 00:38:41.955 "dma_device_id": "system", 00:38:41.955 "dma_device_type": 1 00:38:41.955 }, 00:38:41.955 { 00:38:41.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:41.955 "dma_device_type": 2 00:38:41.955 } 00:38:41.955 ], 00:38:41.955 "driver_specific": { 00:38:41.955 "raid": { 00:38:41.955 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:41.955 "strip_size_kb": 0, 00:38:41.955 "state": "online", 00:38:41.955 "raid_level": "raid1", 00:38:41.955 "superblock": true, 00:38:41.955 "num_base_bdevs": 2, 00:38:41.955 
"num_base_bdevs_discovered": 2, 00:38:41.955 "num_base_bdevs_operational": 2, 00:38:41.955 "base_bdevs_list": [ 00:38:41.955 { 00:38:41.955 "name": "pt1", 00:38:41.955 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:41.955 "is_configured": true, 00:38:41.955 "data_offset": 256, 00:38:41.955 "data_size": 7936 00:38:41.955 }, 00:38:41.955 { 00:38:41.955 "name": "pt2", 00:38:41.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:41.955 "is_configured": true, 00:38:41.955 "data_offset": 256, 00:38:41.955 "data_size": 7936 00:38:41.955 } 00:38:41.955 ] 00:38:41.955 } 00:38:41.955 } 00:38:41.955 }' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:38:41.956 pt2' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.956 
14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.956 [2024-10-09 14:07:48.424472] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 27c61503-8e91-4d3b-84c8-67f437752472 '!=' 27c61503-8e91-4d3b-84c8-67f437752472 ']' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.956 [2024-10-09 14:07:48.468261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:41.956 14:07:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:41.956 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.215 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.215 "name": "raid_bdev1", 00:38:42.215 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:42.215 "strip_size_kb": 0, 00:38:42.215 "state": "online", 00:38:42.215 "raid_level": "raid1", 00:38:42.215 "superblock": true, 00:38:42.215 "num_base_bdevs": 2, 00:38:42.215 "num_base_bdevs_discovered": 1, 00:38:42.215 "num_base_bdevs_operational": 1, 00:38:42.215 "base_bdevs_list": [ 00:38:42.215 { 00:38:42.215 "name": null, 00:38:42.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.215 "is_configured": false, 00:38:42.215 "data_offset": 0, 00:38:42.215 "data_size": 7936 00:38:42.215 }, 00:38:42.215 { 00:38:42.215 "name": "pt2", 00:38:42.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:42.215 "is_configured": true, 00:38:42.215 "data_offset": 256, 00:38:42.215 "data_size": 7936 00:38:42.215 } 00:38:42.215 ] 00:38:42.215 }' 00:38:42.215 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:38:42.215 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 [2024-10-09 14:07:48.916346] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:42.473 [2024-10-09 14:07:48.916493] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:42.473 [2024-10-09 14:07:48.916704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:42.473 [2024-10-09 14:07:48.916792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:42.473 [2024-10-09 14:07:48.916907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:38:42.473 14:07:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:38:42.473 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.474 [2024-10-09 14:07:48.988346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:42.474 [2024-10-09 14:07:48.988522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.474 
[2024-10-09 14:07:48.988556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:38:42.474 [2024-10-09 14:07:48.988586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.474 [2024-10-09 14:07:48.990992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.474 [2024-10-09 14:07:48.991022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:42.474 [2024-10-09 14:07:48.991083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:42.474 [2024-10-09 14:07:48.991113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:42.474 [2024-10-09 14:07:48.991181] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:38:42.474 [2024-10-09 14:07:48.991190] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:42.474 [2024-10-09 14:07:48.991267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:42.474 [2024-10-09 14:07:48.991343] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:38:42.474 [2024-10-09 14:07:48.991355] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:38:42.474 [2024-10-09 14:07:48.991419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:42.474 pt2 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.474 14:07:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.474 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.733 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:42.733 "name": "raid_bdev1", 00:38:42.733 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:42.733 "strip_size_kb": 0, 00:38:42.733 "state": "online", 00:38:42.733 "raid_level": "raid1", 00:38:42.733 "superblock": true, 00:38:42.733 "num_base_bdevs": 2, 00:38:42.733 "num_base_bdevs_discovered": 1, 00:38:42.733 "num_base_bdevs_operational": 1, 00:38:42.733 "base_bdevs_list": [ 00:38:42.733 { 00:38:42.733 
"name": null, 00:38:42.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:42.733 "is_configured": false, 00:38:42.733 "data_offset": 256, 00:38:42.733 "data_size": 7936 00:38:42.733 }, 00:38:42.733 { 00:38:42.733 "name": "pt2", 00:38:42.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:42.733 "is_configured": true, 00:38:42.733 "data_offset": 256, 00:38:42.733 "data_size": 7936 00:38:42.733 } 00:38:42.733 ] 00:38:42.733 }' 00:38:42.733 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:42.733 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.991 [2024-10-09 14:07:49.452464] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:42.991 [2024-10-09 14:07:49.452623] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:42.991 [2024-10-09 14:07:49.452775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:42.991 [2024-10-09 14:07:49.452856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:42.991 [2024-10-09 14:07:49.453116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.991 [2024-10-09 14:07:49.504439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:42.991 [2024-10-09 14:07:49.504500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:42.991 [2024-10-09 14:07:49.504524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:42.991 [2024-10-09 14:07:49.504542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:42.991 [2024-10-09 14:07:49.506949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:42.991 [2024-10-09 14:07:49.506989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:42.991 [2024-10-09 14:07:49.507041] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:42.991 
[2024-10-09 14:07:49.507079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:42.991 [2024-10-09 14:07:49.507176] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:42.991 [2024-10-09 14:07:49.507191] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:42.991 [2024-10-09 14:07:49.507213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:38:42.991 [2024-10-09 14:07:49.507245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:42.991 [2024-10-09 14:07:49.507303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:38:42.991 [2024-10-09 14:07:49.507319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:42.991 [2024-10-09 14:07:49.507391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:42.991 [2024-10-09 14:07:49.507468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:38:42.991 [2024-10-09 14:07:49.507477] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:38:42.991 [2024-10-09 14:07:49.507566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:42.991 pt1 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:42.991 14:07:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:42.991 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.250 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:43.250 "name": "raid_bdev1", 00:38:43.250 "uuid": "27c61503-8e91-4d3b-84c8-67f437752472", 00:38:43.250 "strip_size_kb": 0, 00:38:43.250 "state": "online", 00:38:43.250 "raid_level": "raid1", 00:38:43.250 "superblock": true, 00:38:43.250 "num_base_bdevs": 2, 00:38:43.250 "num_base_bdevs_discovered": 1, 00:38:43.250 
"num_base_bdevs_operational": 1, 00:38:43.250 "base_bdevs_list": [ 00:38:43.250 { 00:38:43.250 "name": null, 00:38:43.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.250 "is_configured": false, 00:38:43.250 "data_offset": 256, 00:38:43.250 "data_size": 7936 00:38:43.250 }, 00:38:43.250 { 00:38:43.250 "name": "pt2", 00:38:43.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:43.250 "is_configured": true, 00:38:43.250 "data_offset": 256, 00:38:43.250 "data_size": 7936 00:38:43.250 } 00:38:43.250 ] 00:38:43.250 }' 00:38:43.250 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:43.250 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:43.509 14:07:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:43.509 [2024-10-09 
14:07:49.988826] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 27c61503-8e91-4d3b-84c8-67f437752472 '!=' 27c61503-8e91-4d3b-84c8-67f437752472 ']' 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98221 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98221 ']' 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98221 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:43.509 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98221 00:38:43.768 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:43.768 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:43.768 killing process with pid 98221 00:38:43.768 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98221' 00:38:43.768 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98221 00:38:43.768 [2024-10-09 14:07:50.079475] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:43.768 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98221 00:38:43.768 [2024-10-09 14:07:50.079585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:38:43.768 [2024-10-09 14:07:50.079656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:43.768 [2024-10-09 14:07:50.079669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:38:43.768 [2024-10-09 14:07:50.106264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:44.027 14:07:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:38:44.027 ************************************ 00:38:44.027 END TEST raid_superblock_test_md_separate 00:38:44.027 ************************************ 00:38:44.027 00:38:44.027 real 0m4.987s 00:38:44.027 user 0m8.222s 00:38:44.027 sys 0m1.069s 00:38:44.027 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:44.027 14:07:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:44.027 14:07:50 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:38:44.027 14:07:50 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:38:44.027 14:07:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:38:44.027 14:07:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:44.027 14:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:44.027 ************************************ 00:38:44.027 START TEST raid_rebuild_test_sb_md_separate 00:38:44.027 ************************************ 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:38:44.027 
14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:38:44.027 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98537 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98537 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98537 ']' 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:44.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:44.028 14:07:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:44.028 [2024-10-09 14:07:50.525855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:38:44.028 [2024-10-09 14:07:50.526221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98537 ] 00:38:44.028 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:44.028 Zero copy mechanism will not be used. 00:38:44.287 [2024-10-09 14:07:50.684388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.287 [2024-10-09 14:07:50.730736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.287 [2024-10-09 14:07:50.774388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:44.287 [2024-10-09 14:07:50.774625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.223 BaseBdev1_malloc 
00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.223 [2024-10-09 14:07:51.543264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:45.223 [2024-10-09 14:07:51.543340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.223 [2024-10-09 14:07:51.543376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:38:45.223 [2024-10-09 14:07:51.543396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.223 [2024-10-09 14:07:51.545737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:45.223 [2024-10-09 14:07:51.545906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:45.223 BaseBdev1 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.223 BaseBdev2_malloc 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.223 [2024-10-09 14:07:51.583072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:45.223 [2024-10-09 14:07:51.583132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.223 [2024-10-09 14:07:51.583159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:38:45.223 [2024-10-09 14:07:51.583173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.223 [2024-10-09 14:07:51.585759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:45.223 [2024-10-09 14:07:51.585798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:45.223 BaseBdev2 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.223 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.223 spare_malloc 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.224 spare_delay 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.224 [2024-10-09 14:07:51.624764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:45.224 [2024-10-09 14:07:51.624824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:45.224 [2024-10-09 14:07:51.624850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:38:45.224 [2024-10-09 14:07:51.624864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:45.224 [2024-10-09 14:07:51.627132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:45.224 [2024-10-09 14:07:51.627170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:45.224 spare 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:38:45.224 [2024-10-09 14:07:51.636792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:45.224 [2024-10-09 14:07:51.639071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:45.224 [2024-10-09 14:07:51.639224] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:38:45.224 [2024-10-09 14:07:51.639238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:45.224 [2024-10-09 14:07:51.639316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:38:45.224 [2024-10-09 14:07:51.639407] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:38:45.224 [2024-10-09 14:07:51.639418] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:38:45.224 [2024-10-09 14:07:51.639504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:45.224 14:07:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:45.224 "name": "raid_bdev1", 00:38:45.224 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:45.224 "strip_size_kb": 0, 00:38:45.224 "state": "online", 00:38:45.224 "raid_level": "raid1", 00:38:45.224 "superblock": true, 00:38:45.224 "num_base_bdevs": 2, 00:38:45.224 "num_base_bdevs_discovered": 2, 00:38:45.224 "num_base_bdevs_operational": 2, 00:38:45.224 "base_bdevs_list": [ 00:38:45.224 { 00:38:45.224 "name": "BaseBdev1", 00:38:45.224 "uuid": "48a567cf-7ed1-5b1d-8151-6f3ed21e09e5", 00:38:45.224 "is_configured": true, 00:38:45.224 "data_offset": 256, 00:38:45.224 "data_size": 7936 00:38:45.224 }, 00:38:45.224 { 00:38:45.224 "name": "BaseBdev2", 00:38:45.224 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:45.224 "is_configured": true, 00:38:45.224 "data_offset": 256, 00:38:45.224 "data_size": 7936 
00:38:45.224 } 00:38:45.224 ] 00:38:45.224 }' 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:45.224 14:07:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:38:45.792 [2024-10-09 14:07:52.105221] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:45.792 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:38:46.051 [2024-10-09 14:07:52.461042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:38:46.051 /dev/nbd0 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:46.051 1+0 records in 00:38:46.051 1+0 records out 00:38:46.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324113 s, 12.6 MB/s 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:46.051 14:07:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:38:46.051 14:07:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:38:46.987 7936+0 records in 00:38:46.987 7936+0 records out 00:38:46.987 32505856 bytes (33 MB, 31 MiB) copied, 0.679928 s, 47.8 MB/s 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:46.987 [2024-10-09 14:07:53.514789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:46.987 14:07:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.987 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:46.987 [2024-10-09 14:07:53.534896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:47.246 "name": "raid_bdev1", 00:38:47.246 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:47.246 "strip_size_kb": 0, 00:38:47.246 "state": "online", 00:38:47.246 "raid_level": "raid1", 00:38:47.246 "superblock": true, 00:38:47.246 "num_base_bdevs": 2, 00:38:47.246 "num_base_bdevs_discovered": 1, 00:38:47.246 "num_base_bdevs_operational": 1, 00:38:47.246 "base_bdevs_list": [ 00:38:47.246 { 00:38:47.246 "name": null, 00:38:47.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.246 "is_configured": false, 00:38:47.246 "data_offset": 0, 00:38:47.246 "data_size": 7936 00:38:47.246 }, 00:38:47.246 { 00:38:47.246 "name": "BaseBdev2", 00:38:47.246 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:47.246 "is_configured": true, 00:38:47.246 "data_offset": 256, 00:38:47.246 "data_size": 7936 00:38:47.246 } 00:38:47.246 ] 00:38:47.246 }' 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:47.246 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:38:47.505 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:47.505 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:47.505 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:47.505 [2024-10-09 14:07:53.971056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:47.505 [2024-10-09 14:07:53.973150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:38:47.505 [2024-10-09 14:07:53.975870] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:47.505 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:47.505 14:07:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.440 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:38:48.699 14:07:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:48.699 "name": "raid_bdev1", 00:38:48.699 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:48.699 "strip_size_kb": 0, 00:38:48.699 "state": "online", 00:38:48.699 "raid_level": "raid1", 00:38:48.699 "superblock": true, 00:38:48.699 "num_base_bdevs": 2, 00:38:48.699 "num_base_bdevs_discovered": 2, 00:38:48.699 "num_base_bdevs_operational": 2, 00:38:48.699 "process": { 00:38:48.699 "type": "rebuild", 00:38:48.699 "target": "spare", 00:38:48.699 "progress": { 00:38:48.699 "blocks": 2560, 00:38:48.699 "percent": 32 00:38:48.699 } 00:38:48.699 }, 00:38:48.699 "base_bdevs_list": [ 00:38:48.699 { 00:38:48.699 "name": "spare", 00:38:48.699 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:48.699 "is_configured": true, 00:38:48.699 "data_offset": 256, 00:38:48.699 "data_size": 7936 00:38:48.699 }, 00:38:48.699 { 00:38:48.699 "name": "BaseBdev2", 00:38:48.699 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:48.699 "is_configured": true, 00:38:48.699 "data_offset": 256, 00:38:48.699 "data_size": 7936 00:38:48.699 } 00:38:48.699 ] 00:38:48.699 }' 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:48.699 14:07:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:48.699 [2024-10-09 14:07:55.132934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:48.699 [2024-10-09 14:07:55.183965] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:48.699 [2024-10-09 14:07:55.184027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:48.699 [2024-10-09 14:07:55.184049] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:48.699 [2024-10-09 14:07:55.184058] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:48.699 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:48.700 14:07:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:48.700 "name": "raid_bdev1", 00:38:48.700 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:48.700 "strip_size_kb": 0, 00:38:48.700 "state": "online", 00:38:48.700 "raid_level": "raid1", 00:38:48.700 "superblock": true, 00:38:48.700 "num_base_bdevs": 2, 00:38:48.700 "num_base_bdevs_discovered": 1, 00:38:48.700 "num_base_bdevs_operational": 1, 00:38:48.700 "base_bdevs_list": [ 00:38:48.700 { 00:38:48.700 "name": null, 00:38:48.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:48.700 "is_configured": false, 00:38:48.700 "data_offset": 0, 00:38:48.700 "data_size": 7936 00:38:48.700 }, 00:38:48.700 { 00:38:48.700 "name": "BaseBdev2", 00:38:48.700 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:48.700 "is_configured": true, 00:38:48.700 "data_offset": 256, 00:38:48.700 "data_size": 7936 00:38:48.700 } 00:38:48.700 ] 00:38:48.700 }' 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:48.700 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:49.268 "name": "raid_bdev1", 00:38:49.268 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:49.268 "strip_size_kb": 0, 00:38:49.268 "state": "online", 00:38:49.268 "raid_level": "raid1", 00:38:49.268 "superblock": true, 00:38:49.268 "num_base_bdevs": 2, 00:38:49.268 "num_base_bdevs_discovered": 1, 00:38:49.268 "num_base_bdevs_operational": 1, 00:38:49.268 "base_bdevs_list": [ 00:38:49.268 { 00:38:49.268 "name": null, 00:38:49.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:49.268 
"is_configured": false, 00:38:49.268 "data_offset": 0, 00:38:49.268 "data_size": 7936 00:38:49.268 }, 00:38:49.268 { 00:38:49.268 "name": "BaseBdev2", 00:38:49.268 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:49.268 "is_configured": true, 00:38:49.268 "data_offset": 256, 00:38:49.268 "data_size": 7936 00:38:49.268 } 00:38:49.268 ] 00:38:49.268 }' 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:49.268 [2024-10-09 14:07:55.783561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:49.268 [2024-10-09 14:07:55.785348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:38:49.268 [2024-10-09 14:07:55.787659] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:49.268 14:07:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:50.644 14:07:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:50.644 "name": "raid_bdev1", 00:38:50.644 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:50.644 "strip_size_kb": 0, 00:38:50.644 "state": "online", 00:38:50.644 "raid_level": "raid1", 00:38:50.644 "superblock": true, 00:38:50.644 "num_base_bdevs": 2, 00:38:50.644 "num_base_bdevs_discovered": 2, 00:38:50.644 "num_base_bdevs_operational": 2, 00:38:50.644 "process": { 00:38:50.644 "type": "rebuild", 00:38:50.644 "target": "spare", 00:38:50.644 "progress": { 00:38:50.644 "blocks": 2560, 00:38:50.644 "percent": 32 00:38:50.644 } 00:38:50.644 }, 00:38:50.644 "base_bdevs_list": [ 00:38:50.644 { 00:38:50.644 "name": "spare", 00:38:50.644 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:50.644 "is_configured": true, 00:38:50.644 "data_offset": 256, 00:38:50.644 "data_size": 7936 00:38:50.644 }, 
00:38:50.644 { 00:38:50.644 "name": "BaseBdev2", 00:38:50.644 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:50.644 "is_configured": true, 00:38:50.644 "data_offset": 256, 00:38:50.644 "data_size": 7936 00:38:50.644 } 00:38:50.644 ] 00:38:50.644 }' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:38:50.644 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:38:50.644 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:50.645 14:07:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:50.645 "name": "raid_bdev1", 00:38:50.645 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:50.645 "strip_size_kb": 0, 00:38:50.645 "state": "online", 00:38:50.645 "raid_level": "raid1", 00:38:50.645 "superblock": true, 00:38:50.645 "num_base_bdevs": 2, 00:38:50.645 "num_base_bdevs_discovered": 2, 00:38:50.645 "num_base_bdevs_operational": 2, 00:38:50.645 "process": { 00:38:50.645 "type": "rebuild", 00:38:50.645 "target": "spare", 00:38:50.645 "progress": { 00:38:50.645 "blocks": 2816, 00:38:50.645 "percent": 35 00:38:50.645 } 00:38:50.645 }, 00:38:50.645 "base_bdevs_list": [ 00:38:50.645 { 00:38:50.645 "name": "spare", 00:38:50.645 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:50.645 "is_configured": true, 00:38:50.645 "data_offset": 256, 00:38:50.645 "data_size": 7936 00:38:50.645 }, 00:38:50.645 { 00:38:50.645 "name": "BaseBdev2", 00:38:50.645 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:50.645 
"is_configured": true, 00:38:50.645 "data_offset": 256, 00:38:50.645 "data_size": 7936 00:38:50.645 } 00:38:50.645 ] 00:38:50.645 }' 00:38:50.645 14:07:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:50.645 14:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:50.645 14:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:50.645 14:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:50.645 14:07:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:51.586 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:51.586 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:51.586 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:51.586 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:51.586 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:51.587 14:07:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:51.587 "name": "raid_bdev1", 00:38:51.587 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:51.587 "strip_size_kb": 0, 00:38:51.587 "state": "online", 00:38:51.587 "raid_level": "raid1", 00:38:51.587 "superblock": true, 00:38:51.587 "num_base_bdevs": 2, 00:38:51.587 "num_base_bdevs_discovered": 2, 00:38:51.587 "num_base_bdevs_operational": 2, 00:38:51.587 "process": { 00:38:51.587 "type": "rebuild", 00:38:51.587 "target": "spare", 00:38:51.587 "progress": { 00:38:51.587 "blocks": 5632, 00:38:51.587 "percent": 70 00:38:51.587 } 00:38:51.587 }, 00:38:51.587 "base_bdevs_list": [ 00:38:51.587 { 00:38:51.587 "name": "spare", 00:38:51.587 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:51.587 "is_configured": true, 00:38:51.587 "data_offset": 256, 00:38:51.587 "data_size": 7936 00:38:51.587 }, 00:38:51.587 { 00:38:51.587 "name": "BaseBdev2", 00:38:51.587 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:51.587 "is_configured": true, 00:38:51.587 "data_offset": 256, 00:38:51.587 "data_size": 7936 00:38:51.587 } 00:38:51.587 ] 00:38:51.587 }' 00:38:51.587 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:51.846 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:51.846 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:51.846 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:51.846 14:07:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:38:52.413 [2024-10-09 14:07:58.904966] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:38:52.413 [2024-10-09 14:07:58.905054] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:52.413 [2024-10-09 14:07:58.905150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:52.981 "name": "raid_bdev1", 00:38:52.981 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:52.981 "strip_size_kb": 0, 00:38:52.981 "state": "online", 00:38:52.981 "raid_level": "raid1", 00:38:52.981 "superblock": true, 00:38:52.981 
"num_base_bdevs": 2, 00:38:52.981 "num_base_bdevs_discovered": 2, 00:38:52.981 "num_base_bdevs_operational": 2, 00:38:52.981 "base_bdevs_list": [ 00:38:52.981 { 00:38:52.981 "name": "spare", 00:38:52.981 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:52.981 "is_configured": true, 00:38:52.981 "data_offset": 256, 00:38:52.981 "data_size": 7936 00:38:52.981 }, 00:38:52.981 { 00:38:52.981 "name": "BaseBdev2", 00:38:52.981 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:52.981 "is_configured": true, 00:38:52.981 "data_offset": 256, 00:38:52.981 "data_size": 7936 00:38:52.981 } 00:38:52.981 ] 00:38:52.981 }' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.981 
14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:52.981 "name": "raid_bdev1", 00:38:52.981 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:52.981 "strip_size_kb": 0, 00:38:52.981 "state": "online", 00:38:52.981 "raid_level": "raid1", 00:38:52.981 "superblock": true, 00:38:52.981 "num_base_bdevs": 2, 00:38:52.981 "num_base_bdevs_discovered": 2, 00:38:52.981 "num_base_bdevs_operational": 2, 00:38:52.981 "base_bdevs_list": [ 00:38:52.981 { 00:38:52.981 "name": "spare", 00:38:52.981 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:52.981 "is_configured": true, 00:38:52.981 "data_offset": 256, 00:38:52.981 "data_size": 7936 00:38:52.981 }, 00:38:52.981 { 00:38:52.981 "name": "BaseBdev2", 00:38:52.981 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:52.981 "is_configured": true, 00:38:52.981 "data_offset": 256, 00:38:52.981 "data_size": 7936 00:38:52.981 } 00:38:52.981 ] 00:38:52.981 }' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.981 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.241 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:53.241 "name": "raid_bdev1", 00:38:53.241 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:53.241 
"strip_size_kb": 0, 00:38:53.241 "state": "online", 00:38:53.241 "raid_level": "raid1", 00:38:53.241 "superblock": true, 00:38:53.241 "num_base_bdevs": 2, 00:38:53.241 "num_base_bdevs_discovered": 2, 00:38:53.241 "num_base_bdevs_operational": 2, 00:38:53.241 "base_bdevs_list": [ 00:38:53.241 { 00:38:53.241 "name": "spare", 00:38:53.241 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:53.241 "is_configured": true, 00:38:53.241 "data_offset": 256, 00:38:53.241 "data_size": 7936 00:38:53.241 }, 00:38:53.241 { 00:38:53.241 "name": "BaseBdev2", 00:38:53.241 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:53.241 "is_configured": true, 00:38:53.241 "data_offset": 256, 00:38:53.241 "data_size": 7936 00:38:53.241 } 00:38:53.241 ] 00:38:53.241 }' 00:38:53.241 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:53.241 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:53.500 [2024-10-09 14:07:59.964235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:53.500 [2024-10-09 14:07:59.964268] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:53.500 [2024-10-09 14:07:59.964354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:53.500 [2024-10-09 14:07:59.964424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:53.500 [2024-10-09 14:07:59.964438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, 
state offline 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:38:53.500 14:07:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:53.500 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:53.501 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:38:53.501 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:53.501 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:53.501 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:38:53.759 /dev/nbd0 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:54.018 1+0 records in 00:38:54.018 1+0 records out 00:38:54.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276925 s, 14.8 MB/s 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:38:54.018 /dev/nbd1 00:38:54.018 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:38:54.277 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:38:54.277 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:38:54.277 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:54.278 1+0 records in 00:38:54.278 1+0 records out 00:38:54.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321809 s, 12.7 MB/s 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:54.278 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:54.536 14:08:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:54.795 [2024-10-09 14:08:01.147869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:54.795 [2024-10-09 14:08:01.147928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:54.795 [2024-10-09 14:08:01.147951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:54.795 [2024-10-09 14:08:01.147968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:38:54.795 [2024-10-09 14:08:01.150564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:54.795 [2024-10-09 14:08:01.150605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:54.795 [2024-10-09 14:08:01.150668] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:54.795 [2024-10-09 14:08:01.150714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:54.795 [2024-10-09 14:08:01.150852] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:54.795 spare 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.795 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:54.795 [2024-10-09 14:08:01.250951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:38:54.796 [2024-10-09 14:08:01.250984] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:38:54.796 [2024-10-09 14:08:01.251114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:38:54.796 [2024-10-09 14:08:01.251245] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:38:54.796 [2024-10-09 14:08:01.251262] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:38:54.796 [2024-10-09 14:08:01.251385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:54.796 "name": "raid_bdev1", 00:38:54.796 "uuid": 
"04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:54.796 "strip_size_kb": 0, 00:38:54.796 "state": "online", 00:38:54.796 "raid_level": "raid1", 00:38:54.796 "superblock": true, 00:38:54.796 "num_base_bdevs": 2, 00:38:54.796 "num_base_bdevs_discovered": 2, 00:38:54.796 "num_base_bdevs_operational": 2, 00:38:54.796 "base_bdevs_list": [ 00:38:54.796 { 00:38:54.796 "name": "spare", 00:38:54.796 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:54.796 "is_configured": true, 00:38:54.796 "data_offset": 256, 00:38:54.796 "data_size": 7936 00:38:54.796 }, 00:38:54.796 { 00:38:54.796 "name": "BaseBdev2", 00:38:54.796 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:54.796 "is_configured": true, 00:38:54.796 "data_offset": 256, 00:38:54.796 "data_size": 7936 00:38:54.796 } 00:38:54.796 ] 00:38:54.796 }' 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:54.796 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:55.364 "name": "raid_bdev1", 00:38:55.364 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:55.364 "strip_size_kb": 0, 00:38:55.364 "state": "online", 00:38:55.364 "raid_level": "raid1", 00:38:55.364 "superblock": true, 00:38:55.364 "num_base_bdevs": 2, 00:38:55.364 "num_base_bdevs_discovered": 2, 00:38:55.364 "num_base_bdevs_operational": 2, 00:38:55.364 "base_bdevs_list": [ 00:38:55.364 { 00:38:55.364 "name": "spare", 00:38:55.364 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:55.364 "is_configured": true, 00:38:55.364 "data_offset": 256, 00:38:55.364 "data_size": 7936 00:38:55.364 }, 00:38:55.364 { 00:38:55.364 "name": "BaseBdev2", 00:38:55.364 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:55.364 "is_configured": true, 00:38:55.364 "data_offset": 256, 00:38:55.364 "data_size": 7936 00:38:55.364 } 00:38:55.364 ] 00:38:55.364 }' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.364 14:08:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.364 [2024-10-09 14:08:01.860099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.364 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.624 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:55.624 "name": "raid_bdev1", 00:38:55.624 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:55.624 "strip_size_kb": 0, 00:38:55.624 "state": "online", 00:38:55.624 "raid_level": "raid1", 00:38:55.624 "superblock": true, 00:38:55.624 "num_base_bdevs": 2, 00:38:55.624 "num_base_bdevs_discovered": 1, 00:38:55.624 "num_base_bdevs_operational": 1, 00:38:55.624 "base_bdevs_list": [ 00:38:55.624 { 00:38:55.624 "name": null, 00:38:55.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:55.624 "is_configured": false, 00:38:55.624 "data_offset": 0, 00:38:55.624 "data_size": 7936 00:38:55.624 }, 00:38:55.624 { 00:38:55.624 "name": "BaseBdev2", 00:38:55.624 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:55.624 "is_configured": true, 00:38:55.624 "data_offset": 256, 00:38:55.624 "data_size": 7936 00:38:55.624 } 00:38:55.624 ] 00:38:55.624 }' 00:38:55.624 14:08:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:55.624 14:08:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.882 14:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:38:55.882 14:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:55.882 14:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:55.882 [2024-10-09 14:08:02.320229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:55.882 [2024-10-09 14:08:02.320432] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:55.882 [2024-10-09 14:08:02.320465] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:55.882 [2024-10-09 14:08:02.320512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:55.882 [2024-10-09 14:08:02.322317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:38:55.882 [2024-10-09 14:08:02.324794] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:55.882 14:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:55.882 14:08:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:56.818 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:57.077 "name": "raid_bdev1", 00:38:57.077 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:57.077 "strip_size_kb": 0, 00:38:57.077 "state": "online", 00:38:57.077 "raid_level": "raid1", 00:38:57.077 "superblock": true, 00:38:57.077 "num_base_bdevs": 2, 00:38:57.077 "num_base_bdevs_discovered": 2, 00:38:57.077 "num_base_bdevs_operational": 2, 00:38:57.077 "process": { 00:38:57.077 "type": "rebuild", 00:38:57.077 "target": "spare", 00:38:57.077 "progress": { 00:38:57.077 "blocks": 2560, 00:38:57.077 "percent": 32 00:38:57.077 } 00:38:57.077 }, 00:38:57.077 "base_bdevs_list": [ 00:38:57.077 { 00:38:57.077 "name": "spare", 00:38:57.077 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:57.077 "is_configured": true, 00:38:57.077 "data_offset": 256, 00:38:57.077 "data_size": 7936 00:38:57.077 }, 00:38:57.077 { 00:38:57.077 "name": "BaseBdev2", 00:38:57.077 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:57.077 "is_configured": true, 00:38:57.077 "data_offset": 256, 00:38:57.077 "data_size": 7936 00:38:57.077 } 00:38:57.077 ] 00:38:57.077 }' 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:57.077 [2024-10-09 14:08:03.478095] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:57.077 [2024-10-09 14:08:03.532222] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:57.077 [2024-10-09 14:08:03.532291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:57.077 [2024-10-09 14:08:03.532311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:57.077 [2024-10-09 14:08:03.532320] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:57.077 "name": "raid_bdev1", 00:38:57.077 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:57.077 "strip_size_kb": 0, 00:38:57.077 "state": "online", 00:38:57.077 "raid_level": "raid1", 00:38:57.077 "superblock": true, 00:38:57.077 "num_base_bdevs": 2, 00:38:57.077 "num_base_bdevs_discovered": 1, 00:38:57.077 "num_base_bdevs_operational": 1, 00:38:57.077 "base_bdevs_list": [ 00:38:57.077 { 00:38:57.077 "name": null, 00:38:57.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:57.077 
"is_configured": false, 00:38:57.077 "data_offset": 0, 00:38:57.077 "data_size": 7936 00:38:57.077 }, 00:38:57.077 { 00:38:57.077 "name": "BaseBdev2", 00:38:57.077 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:57.077 "is_configured": true, 00:38:57.077 "data_offset": 256, 00:38:57.077 "data_size": 7936 00:38:57.077 } 00:38:57.077 ] 00:38:57.077 }' 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:57.077 14:08:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:57.644 14:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:38:57.644 14:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:57.644 14:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:57.644 [2024-10-09 14:08:04.011588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:57.645 [2024-10-09 14:08:04.011654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:57.645 [2024-10-09 14:08:04.011686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:57.645 [2024-10-09 14:08:04.011698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:57.645 [2024-10-09 14:08:04.011924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:57.645 [2024-10-09 14:08:04.011941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:57.645 [2024-10-09 14:08:04.012012] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:57.645 [2024-10-09 14:08:04.012025] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:38:57.645 [2024-10-09 14:08:04.012043] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:57.645 [2024-10-09 14:08:04.012072] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:57.645 [2024-10-09 14:08:04.013756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:38:57.645 [2024-10-09 14:08:04.016099] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:57.645 spare 00:38:57.645 14:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:57.645 14:08:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:58.585 "name": "raid_bdev1", 00:38:58.585 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:58.585 "strip_size_kb": 0, 00:38:58.585 "state": "online", 00:38:58.585 "raid_level": "raid1", 00:38:58.585 "superblock": true, 00:38:58.585 "num_base_bdevs": 2, 00:38:58.585 "num_base_bdevs_discovered": 2, 00:38:58.585 "num_base_bdevs_operational": 2, 00:38:58.585 "process": { 00:38:58.585 "type": "rebuild", 00:38:58.585 "target": "spare", 00:38:58.585 "progress": { 00:38:58.585 "blocks": 2560, 00:38:58.585 "percent": 32 00:38:58.585 } 00:38:58.585 }, 00:38:58.585 "base_bdevs_list": [ 00:38:58.585 { 00:38:58.585 "name": "spare", 00:38:58.585 "uuid": "d276ace5-f4e6-5dc9-a8f5-a482a5adf1a7", 00:38:58.585 "is_configured": true, 00:38:58.585 "data_offset": 256, 00:38:58.585 "data_size": 7936 00:38:58.585 }, 00:38:58.585 { 00:38:58.585 "name": "BaseBdev2", 00:38:58.585 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:58.585 "is_configured": true, 00:38:58.585 "data_offset": 256, 00:38:58.585 "data_size": 7936 00:38:58.585 } 00:38:58.585 ] 00:38:58.585 }' 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:58.585 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.861 14:08:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:58.861 [2024-10-09 14:08:05.148973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:58.861 [2024-10-09 14:08:05.222997] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:58.861 [2024-10-09 14:08:05.223197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:58.861 [2024-10-09 14:08:05.223219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:58.861 [2024-10-09 14:08:05.223232] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:38:58.861 14:08:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:38:58.861 "name": "raid_bdev1", 00:38:58.861 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:58.861 "strip_size_kb": 0, 00:38:58.861 "state": "online", 00:38:58.861 "raid_level": "raid1", 00:38:58.861 "superblock": true, 00:38:58.861 "num_base_bdevs": 2, 00:38:58.861 "num_base_bdevs_discovered": 1, 00:38:58.861 "num_base_bdevs_operational": 1, 00:38:58.861 "base_bdevs_list": [ 00:38:58.861 { 00:38:58.861 "name": null, 00:38:58.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:58.861 "is_configured": false, 00:38:58.861 "data_offset": 0, 00:38:58.861 "data_size": 7936 00:38:58.861 }, 00:38:58.861 { 00:38:58.861 "name": "BaseBdev2", 00:38:58.861 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:58.861 "is_configured": true, 00:38:58.861 "data_offset": 256, 00:38:58.861 "data_size": 7936 00:38:58.861 } 00:38:58.861 ] 00:38:58.861 }' 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:38:58.861 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:59.120 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:38:59.444 "name": "raid_bdev1", 00:38:59.444 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:38:59.444 "strip_size_kb": 0, 00:38:59.444 "state": "online", 00:38:59.444 "raid_level": "raid1", 00:38:59.444 "superblock": true, 00:38:59.444 "num_base_bdevs": 2, 00:38:59.444 "num_base_bdevs_discovered": 1, 00:38:59.444 "num_base_bdevs_operational": 1, 00:38:59.444 "base_bdevs_list": [ 00:38:59.444 { 00:38:59.444 "name": null, 00:38:59.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:59.444 "is_configured": false, 00:38:59.444 "data_offset": 0, 00:38:59.444 "data_size": 7936 00:38:59.444 }, 00:38:59.444 { 00:38:59.444 "name": "BaseBdev2", 00:38:59.444 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:38:59.444 "is_configured": true, 
00:38:59.444 "data_offset": 256, 00:38:59.444 "data_size": 7936 00:38:59.444 } 00:38:59.444 ] 00:38:59.444 }' 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:59.444 [2024-10-09 14:08:05.806361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:59.444 [2024-10-09 14:08:05.806424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:59.444 [2024-10-09 14:08:05.806465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:38:59.444 [2024-10-09 14:08:05.806480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:59.444 [2024-10-09 14:08:05.806698] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:59.444 [2024-10-09 14:08:05.806720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:59.444 [2024-10-09 14:08:05.806784] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:59.444 [2024-10-09 14:08:05.806803] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:59.444 [2024-10-09 14:08:05.806816] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:59.444 [2024-10-09 14:08:05.806830] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:38:59.444 BaseBdev1 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:59.444 14:08:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:00.380 "name": "raid_bdev1", 00:39:00.380 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:39:00.380 "strip_size_kb": 0, 00:39:00.380 "state": "online", 00:39:00.380 "raid_level": "raid1", 00:39:00.380 "superblock": true, 00:39:00.380 "num_base_bdevs": 2, 00:39:00.380 "num_base_bdevs_discovered": 1, 00:39:00.380 "num_base_bdevs_operational": 1, 00:39:00.380 "base_bdevs_list": [ 00:39:00.380 { 00:39:00.380 "name": null, 00:39:00.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.380 "is_configured": false, 00:39:00.380 "data_offset": 0, 00:39:00.380 "data_size": 7936 00:39:00.380 }, 00:39:00.380 { 00:39:00.380 "name": "BaseBdev2", 00:39:00.380 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:39:00.380 "is_configured": true, 00:39:00.380 "data_offset": 256, 00:39:00.380 "data_size": 7936 00:39:00.380 } 00:39:00.380 ] 00:39:00.380 }' 00:39:00.380 14:08:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:00.380 14:08:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:00.948 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:00.948 "name": "raid_bdev1", 00:39:00.948 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:39:00.948 "strip_size_kb": 0, 00:39:00.948 "state": "online", 00:39:00.948 "raid_level": "raid1", 00:39:00.948 "superblock": true, 00:39:00.948 "num_base_bdevs": 2, 00:39:00.948 "num_base_bdevs_discovered": 1, 00:39:00.948 "num_base_bdevs_operational": 1, 00:39:00.948 "base_bdevs_list": [ 00:39:00.948 { 00:39:00.948 "name": null, 00:39:00.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:00.948 "is_configured": false, 00:39:00.948 "data_offset": 0, 00:39:00.948 
"data_size": 7936 00:39:00.949 }, 00:39:00.949 { 00:39:00.949 "name": "BaseBdev2", 00:39:00.949 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:39:00.949 "is_configured": true, 00:39:00.949 "data_offset": 256, 00:39:00.949 "data_size": 7936 00:39:00.949 } 00:39:00.949 ] 00:39:00.949 }' 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:00.949 [2024-10-09 14:08:07.434772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:00.949 [2024-10-09 14:08:07.435106] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:00.949 [2024-10-09 14:08:07.435131] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:00.949 request: 00:39:00.949 { 00:39:00.949 "base_bdev": "BaseBdev1", 00:39:00.949 "raid_bdev": "raid_bdev1", 00:39:00.949 "method": "bdev_raid_add_base_bdev", 00:39:00.949 "req_id": 1 00:39:00.949 } 00:39:00.949 Got JSON-RPC error response 00:39:00.949 response: 00:39:00.949 { 00:39:00.949 "code": -22, 00:39:00.949 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:00.949 } 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:00.949 14:08:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:02.330 "name": "raid_bdev1", 00:39:02.330 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:39:02.330 "strip_size_kb": 0, 00:39:02.330 "state": "online", 00:39:02.330 "raid_level": "raid1", 00:39:02.330 "superblock": true, 00:39:02.330 "num_base_bdevs": 2, 00:39:02.330 "num_base_bdevs_discovered": 1, 00:39:02.330 "num_base_bdevs_operational": 1, 00:39:02.330 "base_bdevs_list": [ 
00:39:02.330 { 00:39:02.330 "name": null, 00:39:02.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.330 "is_configured": false, 00:39:02.330 "data_offset": 0, 00:39:02.330 "data_size": 7936 00:39:02.330 }, 00:39:02.330 { 00:39:02.330 "name": "BaseBdev2", 00:39:02.330 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:39:02.330 "is_configured": true, 00:39:02.330 "data_offset": 256, 00:39:02.330 "data_size": 7936 00:39:02.330 } 00:39:02.330 ] 00:39:02.330 }' 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:02.330 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:02.331 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:02.331 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.331 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:02.331 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:02.331 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:02.590 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:02.590 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:02.590 "name": "raid_bdev1", 00:39:02.590 "uuid": "04c5c08a-6b73-468c-9e51-55c76b128621", 00:39:02.590 "strip_size_kb": 0, 00:39:02.590 "state": "online", 00:39:02.590 "raid_level": "raid1", 00:39:02.590 "superblock": true, 00:39:02.590 "num_base_bdevs": 2, 00:39:02.590 "num_base_bdevs_discovered": 1, 00:39:02.590 "num_base_bdevs_operational": 1, 00:39:02.590 "base_bdevs_list": [ 00:39:02.590 { 00:39:02.590 "name": null, 00:39:02.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.590 "is_configured": false, 00:39:02.590 "data_offset": 0, 00:39:02.590 "data_size": 7936 00:39:02.590 }, 00:39:02.590 { 00:39:02.590 "name": "BaseBdev2", 00:39:02.590 "uuid": "679e0a01-1c6e-5e2b-95ff-d185f8d14316", 00:39:02.590 "is_configured": true, 00:39:02.590 "data_offset": 256, 00:39:02.590 "data_size": 7936 00:39:02.590 } 00:39:02.590 ] 00:39:02.590 }' 00:39:02.590 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:02.590 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:02.590 14:08:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98537 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98537 ']' 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98537 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:02.590 
14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98537 00:39:02.590 killing process with pid 98537 00:39:02.590 Received shutdown signal, test time was about 60.000000 seconds 00:39:02.590 00:39:02.590 Latency(us) 00:39:02.590 [2024-10-09T14:08:09.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.590 [2024-10-09T14:08:09.141Z] =================================================================================================================== 00:39:02.590 [2024-10-09T14:08:09.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98537' 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98537 00:39:02.590 [2024-10-09 14:08:09.053889] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:02.590 [2024-10-09 14:08:09.054023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:02.590 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98537 00:39:02.590 [2024-10-09 14:08:09.054078] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:02.590 [2024-10-09 14:08:09.054091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:39:02.590 [2024-10-09 14:08:09.088073] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:02.849 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:39:02.849 00:39:02.849 real 0m18.920s 00:39:02.849 user 0m25.325s 00:39:02.849 sys 0m2.807s 00:39:02.849 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:02.849 ************************************ 00:39:02.849 END TEST raid_rebuild_test_sb_md_separate 00:39:02.849 ************************************ 00:39:02.849 14:08:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:39:02.849 14:08:09 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:39:02.849 14:08:09 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:39:02.849 14:08:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:39:02.849 14:08:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:02.849 14:08:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:02.849 ************************************ 00:39:02.849 START TEST raid_state_function_test_sb_md_interleaved 00:39:02.849 ************************************ 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:39:02.849 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:39:02.850 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:02.850 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:39:02.850 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:39:02.850 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:39:02.850 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:39:03.108 Process raid pid: 99222 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:39:03.108 
14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99222 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99222' 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99222 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99222 ']' 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:03.108 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:03.109 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:03.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:03.109 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:03.109 14:08:09 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:03.109 [2024-10-09 14:08:09.520276] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:03.109 [2024-10-09 14:08:09.520735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:03.367 [2024-10-09 14:08:09.708157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:03.367 [2024-10-09 14:08:09.752675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.367 [2024-10-09 14:08:09.796759] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:03.367 [2024-10-09 14:08:09.796984] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:03.935 [2024-10-09 14:08:10.376119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:03.935 [2024-10-09 14:08:10.376171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:03.935 [2024-10-09 14:08:10.376215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:03.935 [2024-10-09 14:08:10.376230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:03.935 14:08:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:03.935 14:08:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:03.935 "name": "Existed_Raid", 00:39:03.935 "uuid": "d5898b86-570b-412e-b4dd-b929c20e0640", 00:39:03.935 "strip_size_kb": 0, 00:39:03.935 "state": "configuring", 00:39:03.935 "raid_level": "raid1", 00:39:03.935 "superblock": true, 00:39:03.935 "num_base_bdevs": 2, 00:39:03.935 "num_base_bdevs_discovered": 0, 00:39:03.935 "num_base_bdevs_operational": 2, 00:39:03.935 "base_bdevs_list": [ 00:39:03.935 { 00:39:03.935 "name": "BaseBdev1", 00:39:03.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.935 "is_configured": false, 00:39:03.935 "data_offset": 0, 00:39:03.935 "data_size": 0 00:39:03.935 }, 00:39:03.935 { 00:39:03.935 "name": "BaseBdev2", 00:39:03.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:03.935 "is_configured": false, 00:39:03.935 "data_offset": 0, 00:39:03.935 "data_size": 0 00:39:03.935 } 00:39:03.935 ] 00:39:03.935 }' 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:03.935 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 [2024-10-09 14:08:10.840122] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:04.504 [2024-10-09 14:08:10.840315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 [2024-10-09 14:08:10.852162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:39:04.504 [2024-10-09 14:08:10.852206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:39:04.504 [2024-10-09 14:08:10.852216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:04.504 [2024-10-09 14:08:10.852229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 [2024-10-09 14:08:10.873458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:04.504 BaseBdev1 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.504 [ 00:39:04.504 { 00:39:04.504 "name": "BaseBdev1", 00:39:04.504 "aliases": [ 00:39:04.504 "3fc9400a-4dcd-4e94-b0d5-3d6092445414" 00:39:04.504 ], 00:39:04.504 "product_name": "Malloc disk", 00:39:04.504 "block_size": 4128, 00:39:04.504 "num_blocks": 8192, 00:39:04.504 "uuid": "3fc9400a-4dcd-4e94-b0d5-3d6092445414", 00:39:04.504 "md_size": 32, 00:39:04.504 
"md_interleave": true, 00:39:04.504 "dif_type": 0, 00:39:04.504 "assigned_rate_limits": { 00:39:04.504 "rw_ios_per_sec": 0, 00:39:04.504 "rw_mbytes_per_sec": 0, 00:39:04.504 "r_mbytes_per_sec": 0, 00:39:04.504 "w_mbytes_per_sec": 0 00:39:04.504 }, 00:39:04.504 "claimed": true, 00:39:04.504 "claim_type": "exclusive_write", 00:39:04.504 "zoned": false, 00:39:04.504 "supported_io_types": { 00:39:04.504 "read": true, 00:39:04.504 "write": true, 00:39:04.504 "unmap": true, 00:39:04.504 "flush": true, 00:39:04.504 "reset": true, 00:39:04.504 "nvme_admin": false, 00:39:04.504 "nvme_io": false, 00:39:04.504 "nvme_io_md": false, 00:39:04.504 "write_zeroes": true, 00:39:04.504 "zcopy": true, 00:39:04.504 "get_zone_info": false, 00:39:04.504 "zone_management": false, 00:39:04.504 "zone_append": false, 00:39:04.504 "compare": false, 00:39:04.504 "compare_and_write": false, 00:39:04.504 "abort": true, 00:39:04.504 "seek_hole": false, 00:39:04.504 "seek_data": false, 00:39:04.504 "copy": true, 00:39:04.504 "nvme_iov_md": false 00:39:04.504 }, 00:39:04.504 "memory_domains": [ 00:39:04.504 { 00:39:04.504 "dma_device_id": "system", 00:39:04.504 "dma_device_type": 1 00:39:04.504 }, 00:39:04.504 { 00:39:04.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:04.504 "dma_device_type": 2 00:39:04.504 } 00:39:04.504 ], 00:39:04.504 "driver_specific": {} 00:39:04.504 } 00:39:04.504 ] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.504 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:04.505 14:08:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:04.505 "name": "Existed_Raid", 00:39:04.505 "uuid": "5b554fe3-c530-45e1-9552-6c1745d0d3ab", 00:39:04.505 "strip_size_kb": 0, 00:39:04.505 "state": "configuring", 00:39:04.505 "raid_level": "raid1", 
00:39:04.505 "superblock": true, 00:39:04.505 "num_base_bdevs": 2, 00:39:04.505 "num_base_bdevs_discovered": 1, 00:39:04.505 "num_base_bdevs_operational": 2, 00:39:04.505 "base_bdevs_list": [ 00:39:04.505 { 00:39:04.505 "name": "BaseBdev1", 00:39:04.505 "uuid": "3fc9400a-4dcd-4e94-b0d5-3d6092445414", 00:39:04.505 "is_configured": true, 00:39:04.505 "data_offset": 256, 00:39:04.505 "data_size": 7936 00:39:04.505 }, 00:39:04.505 { 00:39:04.505 "name": "BaseBdev2", 00:39:04.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.505 "is_configured": false, 00:39:04.505 "data_offset": 0, 00:39:04.505 "data_size": 0 00:39:04.505 } 00:39:04.505 ] 00:39:04.505 }' 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:04.505 14:08:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.073 [2024-10-09 14:08:11.345625] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:39:05.073 [2024-10-09 14:08:11.345819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.073 [2024-10-09 14:08:11.357734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:05.073 [2024-10-09 14:08:11.360110] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:39:05.073 [2024-10-09 14:08:11.360157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:05.073 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:05.074 
14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:05.074 "name": "Existed_Raid", 00:39:05.074 "uuid": "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5", 00:39:05.074 "strip_size_kb": 0, 00:39:05.074 "state": "configuring", 00:39:05.074 "raid_level": "raid1", 00:39:05.074 "superblock": true, 00:39:05.074 "num_base_bdevs": 2, 00:39:05.074 "num_base_bdevs_discovered": 1, 00:39:05.074 "num_base_bdevs_operational": 2, 00:39:05.074 "base_bdevs_list": [ 00:39:05.074 { 00:39:05.074 "name": "BaseBdev1", 00:39:05.074 "uuid": "3fc9400a-4dcd-4e94-b0d5-3d6092445414", 00:39:05.074 "is_configured": true, 00:39:05.074 "data_offset": 256, 00:39:05.074 "data_size": 7936 00:39:05.074 }, 00:39:05.074 { 00:39:05.074 "name": "BaseBdev2", 00:39:05.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.074 "is_configured": false, 00:39:05.074 "data_offset": 0, 00:39:05.074 "data_size": 0 00:39:05.074 } 00:39:05.074 ] 00:39:05.074 }' 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:39:05.074 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.333 [2024-10-09 14:08:11.830576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:05.333 [2024-10-09 14:08:11.830799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:39:05.333 [2024-10-09 14:08:11.830819] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:05.333 [2024-10-09 14:08:11.830982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:05.333 BaseBdev2 00:39:05.333 [2024-10-09 14:08:11.831065] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:39:05.333 [2024-10-09 14:08:11.831089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:39:05.333 [2024-10-09 14:08:11.831196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.333 [ 00:39:05.333 { 00:39:05.333 "name": "BaseBdev2", 00:39:05.333 "aliases": [ 00:39:05.333 "6301da70-4f0d-4a9a-bd36-f4f037ecb1f9" 00:39:05.333 ], 00:39:05.333 "product_name": "Malloc disk", 00:39:05.333 "block_size": 4128, 00:39:05.333 "num_blocks": 8192, 00:39:05.333 "uuid": "6301da70-4f0d-4a9a-bd36-f4f037ecb1f9", 00:39:05.333 "md_size": 32, 00:39:05.333 "md_interleave": true, 00:39:05.333 "dif_type": 0, 00:39:05.333 "assigned_rate_limits": { 00:39:05.333 "rw_ios_per_sec": 0, 00:39:05.333 "rw_mbytes_per_sec": 0, 00:39:05.333 "r_mbytes_per_sec": 0, 00:39:05.333 "w_mbytes_per_sec": 0 00:39:05.333 }, 00:39:05.333 "claimed": true, 00:39:05.333 "claim_type": "exclusive_write", 
00:39:05.333 "zoned": false, 00:39:05.333 "supported_io_types": { 00:39:05.333 "read": true, 00:39:05.333 "write": true, 00:39:05.333 "unmap": true, 00:39:05.333 "flush": true, 00:39:05.333 "reset": true, 00:39:05.333 "nvme_admin": false, 00:39:05.333 "nvme_io": false, 00:39:05.333 "nvme_io_md": false, 00:39:05.333 "write_zeroes": true, 00:39:05.333 "zcopy": true, 00:39:05.333 "get_zone_info": false, 00:39:05.333 "zone_management": false, 00:39:05.333 "zone_append": false, 00:39:05.333 "compare": false, 00:39:05.333 "compare_and_write": false, 00:39:05.333 "abort": true, 00:39:05.333 "seek_hole": false, 00:39:05.333 "seek_data": false, 00:39:05.333 "copy": true, 00:39:05.333 "nvme_iov_md": false 00:39:05.333 }, 00:39:05.333 "memory_domains": [ 00:39:05.333 { 00:39:05.333 "dma_device_id": "system", 00:39:05.333 "dma_device_type": 1 00:39:05.333 }, 00:39:05.333 { 00:39:05.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.333 "dma_device_type": 2 00:39:05.333 } 00:39:05.333 ], 00:39:05.333 "driver_specific": {} 00:39:05.333 } 00:39:05.333 ] 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.333 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:05.334 
14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.334 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:05.593 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.593 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:05.593 "name": "Existed_Raid", 00:39:05.593 "uuid": "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5", 00:39:05.593 "strip_size_kb": 0, 00:39:05.593 "state": "online", 00:39:05.593 "raid_level": "raid1", 00:39:05.593 "superblock": true, 00:39:05.593 "num_base_bdevs": 2, 00:39:05.593 "num_base_bdevs_discovered": 2, 00:39:05.593 
"num_base_bdevs_operational": 2, 00:39:05.593 "base_bdevs_list": [ 00:39:05.593 { 00:39:05.593 "name": "BaseBdev1", 00:39:05.593 "uuid": "3fc9400a-4dcd-4e94-b0d5-3d6092445414", 00:39:05.593 "is_configured": true, 00:39:05.593 "data_offset": 256, 00:39:05.593 "data_size": 7936 00:39:05.593 }, 00:39:05.593 { 00:39:05.593 "name": "BaseBdev2", 00:39:05.593 "uuid": "6301da70-4f0d-4a9a-bd36-f4f037ecb1f9", 00:39:05.593 "is_configured": true, 00:39:05.593 "data_offset": 256, 00:39:05.593 "data_size": 7936 00:39:05.593 } 00:39:05.593 ] 00:39:05.593 }' 00:39:05.593 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:05.593 14:08:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.852 14:08:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:05.852 [2024-10-09 14:08:12.319041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:05.852 "name": "Existed_Raid", 00:39:05.852 "aliases": [ 00:39:05.852 "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5" 00:39:05.852 ], 00:39:05.852 "product_name": "Raid Volume", 00:39:05.852 "block_size": 4128, 00:39:05.852 "num_blocks": 7936, 00:39:05.852 "uuid": "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5", 00:39:05.852 "md_size": 32, 00:39:05.852 "md_interleave": true, 00:39:05.852 "dif_type": 0, 00:39:05.852 "assigned_rate_limits": { 00:39:05.852 "rw_ios_per_sec": 0, 00:39:05.852 "rw_mbytes_per_sec": 0, 00:39:05.852 "r_mbytes_per_sec": 0, 00:39:05.852 "w_mbytes_per_sec": 0 00:39:05.852 }, 00:39:05.852 "claimed": false, 00:39:05.852 "zoned": false, 00:39:05.852 "supported_io_types": { 00:39:05.852 "read": true, 00:39:05.852 "write": true, 00:39:05.852 "unmap": false, 00:39:05.852 "flush": false, 00:39:05.852 "reset": true, 00:39:05.852 "nvme_admin": false, 00:39:05.852 "nvme_io": false, 00:39:05.852 "nvme_io_md": false, 00:39:05.852 "write_zeroes": true, 00:39:05.852 "zcopy": false, 00:39:05.852 "get_zone_info": false, 00:39:05.852 "zone_management": false, 00:39:05.852 "zone_append": false, 00:39:05.852 "compare": false, 00:39:05.852 "compare_and_write": false, 00:39:05.852 "abort": false, 00:39:05.852 "seek_hole": false, 00:39:05.852 "seek_data": false, 00:39:05.852 "copy": false, 00:39:05.852 "nvme_iov_md": false 00:39:05.852 }, 00:39:05.852 "memory_domains": [ 00:39:05.852 { 00:39:05.852 "dma_device_id": "system", 00:39:05.852 "dma_device_type": 1 00:39:05.852 }, 00:39:05.852 { 00:39:05.852 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:39:05.852 "dma_device_type": 2 00:39:05.852 }, 00:39:05.852 { 00:39:05.852 "dma_device_id": "system", 00:39:05.852 "dma_device_type": 1 00:39:05.852 }, 00:39:05.852 { 00:39:05.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:05.852 "dma_device_type": 2 00:39:05.852 } 00:39:05.852 ], 00:39:05.852 "driver_specific": { 00:39:05.852 "raid": { 00:39:05.852 "uuid": "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5", 00:39:05.852 "strip_size_kb": 0, 00:39:05.852 "state": "online", 00:39:05.852 "raid_level": "raid1", 00:39:05.852 "superblock": true, 00:39:05.852 "num_base_bdevs": 2, 00:39:05.852 "num_base_bdevs_discovered": 2, 00:39:05.852 "num_base_bdevs_operational": 2, 00:39:05.852 "base_bdevs_list": [ 00:39:05.852 { 00:39:05.852 "name": "BaseBdev1", 00:39:05.852 "uuid": "3fc9400a-4dcd-4e94-b0d5-3d6092445414", 00:39:05.852 "is_configured": true, 00:39:05.852 "data_offset": 256, 00:39:05.852 "data_size": 7936 00:39:05.852 }, 00:39:05.852 { 00:39:05.852 "name": "BaseBdev2", 00:39:05.852 "uuid": "6301da70-4f0d-4a9a-bd36-f4f037ecb1f9", 00:39:05.852 "is_configured": true, 00:39:05.852 "data_offset": 256, 00:39:05.852 "data_size": 7936 00:39:05.852 } 00:39:05.852 ] 00:39:05.852 } 00:39:05.852 } 00:39:05.852 }' 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:39:05.852 BaseBdev2' 00:39:05.852 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:06.111 
14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.111 [2024-10-09 14:08:12.534810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:06.111 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:06.112 14:08:12 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:06.112 "name": "Existed_Raid", 00:39:06.112 "uuid": "a0ce6eb2-f78d-4d6e-8849-d40f604aabd5", 00:39:06.112 "strip_size_kb": 0, 00:39:06.112 "state": "online", 00:39:06.112 "raid_level": "raid1", 00:39:06.112 "superblock": true, 00:39:06.112 "num_base_bdevs": 2, 00:39:06.112 "num_base_bdevs_discovered": 1, 00:39:06.112 "num_base_bdevs_operational": 1, 00:39:06.112 "base_bdevs_list": [ 00:39:06.112 { 00:39:06.112 "name": null, 00:39:06.112 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:39:06.112 "is_configured": false, 00:39:06.112 "data_offset": 0, 00:39:06.112 "data_size": 7936 00:39:06.112 }, 00:39:06.112 { 00:39:06.112 "name": "BaseBdev2", 00:39:06.112 "uuid": "6301da70-4f0d-4a9a-bd36-f4f037ecb1f9", 00:39:06.112 "is_configured": true, 00:39:06.112 "data_offset": 256, 00:39:06.112 "data_size": 7936 00:39:06.112 } 00:39:06.112 ] 00:39:06.112 }' 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:06.112 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.679 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:39:06.679 14:08:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:39:06.679 14:08:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.679 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.679 [2024-10-09 14:08:13.047591] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:39:06.679 [2024-10-09 14:08:13.047817] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:06.679 [2024-10-09 14:08:13.060660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:06.680 [2024-10-09 14:08:13.060707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:06.680 [2024-10-09 14:08:13.060728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99222 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99222 ']' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99222 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99222 00:39:06.680 killing process with pid 99222 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99222' 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99222 00:39:06.680 [2024-10-09 14:08:13.147370] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:06.680 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99222 00:39:06.680 [2024-10-09 14:08:13.148462] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:06.939 
14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:39:06.939 00:39:06.939 real 0m4.004s 00:39:06.939 user 0m6.298s 00:39:06.939 sys 0m0.869s 00:39:06.939 ************************************ 00:39:06.939 END TEST raid_state_function_test_sb_md_interleaved 00:39:06.939 ************************************ 00:39:06.939 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:06.939 14:08:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:06.939 14:08:13 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:39:06.939 14:08:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:06.939 14:08:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:06.939 14:08:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:06.939 ************************************ 00:39:06.939 START TEST raid_superblock_test_md_interleaved 00:39:06.939 ************************************ 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99459 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99459 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99459 ']' 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:06.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:06.939 14:08:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:07.198 [2024-10-09 14:08:13.621797] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:07.198 [2024-10-09 14:08:13.622181] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99459 ] 00:39:07.456 [2024-10-09 14:08:13.799057] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.456 [2024-10-09 14:08:13.843164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.456 [2024-10-09 14:08:13.886360] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:07.456 [2024-10-09 14:08:13.886399] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.024 malloc1 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.024 [2024-10-09 14:08:14.466523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:08.024 [2024-10-09 14:08:14.466721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.024 [2024-10-09 14:08:14.466784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:08.024 [2024-10-09 14:08:14.466874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.024 
[2024-10-09 14:08:14.469155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.024 [2024-10-09 14:08:14.469297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:08.024 pt1 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.024 malloc2 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.024 [2024-10-09 14:08:14.506276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:08.024 [2024-10-09 14:08:14.506340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:08.024 [2024-10-09 14:08:14.506363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:08.024 [2024-10-09 14:08:14.506380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:08.024 [2024-10-09 14:08:14.508957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:08.024 [2024-10-09 14:08:14.509120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:08.024 pt2 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.024 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.024 [2024-10-09 14:08:14.518314] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:08.025 [2024-10-09 14:08:14.520532] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:08.025 [2024-10-09 14:08:14.520698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:39:08.025 [2024-10-09 14:08:14.520718] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:08.025 [2024-10-09 14:08:14.520800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:39:08.025 [2024-10-09 14:08:14.520874] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:39:08.025 [2024-10-09 14:08:14.520886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:39:08.025 [2024-10-09 14:08:14.520955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:08.025 
14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.025 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.283 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:08.283 "name": "raid_bdev1", 00:39:08.283 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:08.283 "strip_size_kb": 0, 00:39:08.283 "state": "online", 00:39:08.283 "raid_level": "raid1", 00:39:08.283 "superblock": true, 00:39:08.283 "num_base_bdevs": 2, 00:39:08.283 "num_base_bdevs_discovered": 2, 00:39:08.283 "num_base_bdevs_operational": 2, 00:39:08.283 "base_bdevs_list": [ 00:39:08.283 { 00:39:08.283 "name": "pt1", 00:39:08.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:08.283 "is_configured": true, 00:39:08.283 "data_offset": 256, 00:39:08.283 "data_size": 7936 00:39:08.283 }, 00:39:08.283 { 00:39:08.283 "name": "pt2", 00:39:08.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.283 "is_configured": true, 00:39:08.283 "data_offset": 256, 00:39:08.283 "data_size": 7936 00:39:08.283 } 00:39:08.283 ] 00:39:08.283 }' 00:39:08.283 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:08.283 14:08:14 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.542 [2024-10-09 14:08:14.970764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.542 14:08:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:08.542 "name": "raid_bdev1", 00:39:08.542 "aliases": [ 00:39:08.542 "f7e73284-3fea-4a76-9b15-47a9953a8149" 00:39:08.542 ], 00:39:08.542 "product_name": "Raid Volume", 00:39:08.542 "block_size": 4128, 00:39:08.542 "num_blocks": 7936, 00:39:08.542 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:08.542 "md_size": 32, 
00:39:08.542 "md_interleave": true, 00:39:08.542 "dif_type": 0, 00:39:08.542 "assigned_rate_limits": { 00:39:08.542 "rw_ios_per_sec": 0, 00:39:08.542 "rw_mbytes_per_sec": 0, 00:39:08.542 "r_mbytes_per_sec": 0, 00:39:08.542 "w_mbytes_per_sec": 0 00:39:08.542 }, 00:39:08.542 "claimed": false, 00:39:08.542 "zoned": false, 00:39:08.542 "supported_io_types": { 00:39:08.542 "read": true, 00:39:08.542 "write": true, 00:39:08.542 "unmap": false, 00:39:08.542 "flush": false, 00:39:08.542 "reset": true, 00:39:08.542 "nvme_admin": false, 00:39:08.542 "nvme_io": false, 00:39:08.542 "nvme_io_md": false, 00:39:08.542 "write_zeroes": true, 00:39:08.542 "zcopy": false, 00:39:08.542 "get_zone_info": false, 00:39:08.542 "zone_management": false, 00:39:08.542 "zone_append": false, 00:39:08.542 "compare": false, 00:39:08.542 "compare_and_write": false, 00:39:08.542 "abort": false, 00:39:08.542 "seek_hole": false, 00:39:08.542 "seek_data": false, 00:39:08.542 "copy": false, 00:39:08.542 "nvme_iov_md": false 00:39:08.542 }, 00:39:08.542 "memory_domains": [ 00:39:08.542 { 00:39:08.542 "dma_device_id": "system", 00:39:08.542 "dma_device_type": 1 00:39:08.542 }, 00:39:08.542 { 00:39:08.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.542 "dma_device_type": 2 00:39:08.542 }, 00:39:08.542 { 00:39:08.542 "dma_device_id": "system", 00:39:08.542 "dma_device_type": 1 00:39:08.542 }, 00:39:08.542 { 00:39:08.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:08.542 "dma_device_type": 2 00:39:08.542 } 00:39:08.542 ], 00:39:08.542 "driver_specific": { 00:39:08.542 "raid": { 00:39:08.542 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:08.542 "strip_size_kb": 0, 00:39:08.542 "state": "online", 00:39:08.542 "raid_level": "raid1", 00:39:08.542 "superblock": true, 00:39:08.542 "num_base_bdevs": 2, 00:39:08.542 "num_base_bdevs_discovered": 2, 00:39:08.542 "num_base_bdevs_operational": 2, 00:39:08.542 "base_bdevs_list": [ 00:39:08.542 { 00:39:08.542 "name": "pt1", 00:39:08.542 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:39:08.542 "is_configured": true, 00:39:08.542 "data_offset": 256, 00:39:08.542 "data_size": 7936 00:39:08.542 }, 00:39:08.542 { 00:39:08.542 "name": "pt2", 00:39:08.542 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:08.542 "is_configured": true, 00:39:08.542 "data_offset": 256, 00:39:08.542 "data_size": 7936 00:39:08.542 } 00:39:08.542 ] 00:39:08.542 } 00:39:08.542 } 00:39:08.542 }' 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:08.542 pt2' 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.542 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:08.800 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:39:08.800 [2024-10-09 14:08:15.182663] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f7e73284-3fea-4a76-9b15-47a9953a8149 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f7e73284-3fea-4a76-9b15-47a9953a8149 ']' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 [2024-10-09 14:08:15.218419] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:08.800 [2024-10-09 14:08:15.218572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:08.800 [2024-10-09 14:08:15.218695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:08.800 [2024-10-09 14:08:15.218773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:08.800 [2024-10-09 14:08:15.218785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.800 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:08.800 [2024-10-09 14:08:15.342475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:39:08.800 [2024-10-09 14:08:15.344773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:39:08.800 [2024-10-09 14:08:15.344957] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:39:08.800 [2024-10-09 14:08:15.345008] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:39:08.800 [2024-10-09 14:08:15.345029] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:08.800 [2024-10-09 14:08:15.345045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:39:09.057 request: 00:39:09.057 { 00:39:09.057 "name": "raid_bdev1", 00:39:09.057 "raid_level": "raid1", 00:39:09.057 "base_bdevs": [ 00:39:09.057 "malloc1", 00:39:09.057 "malloc2" 00:39:09.057 ], 00:39:09.057 "superblock": false, 00:39:09.057 "method": "bdev_raid_create", 00:39:09.057 "req_id": 1 00:39:09.057 } 00:39:09.057 Got JSON-RPC error response 00:39:09.057 response: 00:39:09.057 { 00:39:09.057 "code": -17, 00:39:09.057 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:39:09.057 } 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.057 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.057 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.057 [2024-10-09 14:08:15.410427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:09.058 [2024-10-09 14:08:15.410598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.058 [2024-10-09 14:08:15.410658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:39:09.058 [2024-10-09 14:08:15.410756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.058 [2024-10-09 14:08:15.413003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.058 [2024-10-09 14:08:15.413123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:09.058 [2024-10-09 14:08:15.413275] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:09.058 [2024-10-09 14:08:15.413321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:09.058 pt1 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.058 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.058 
"name": "raid_bdev1", 00:39:09.058 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:09.058 "strip_size_kb": 0, 00:39:09.058 "state": "configuring", 00:39:09.058 "raid_level": "raid1", 00:39:09.058 "superblock": true, 00:39:09.058 "num_base_bdevs": 2, 00:39:09.058 "num_base_bdevs_discovered": 1, 00:39:09.058 "num_base_bdevs_operational": 2, 00:39:09.058 "base_bdevs_list": [ 00:39:09.058 { 00:39:09.058 "name": "pt1", 00:39:09.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:09.058 "is_configured": true, 00:39:09.058 "data_offset": 256, 00:39:09.058 "data_size": 7936 00:39:09.058 }, 00:39:09.058 { 00:39:09.058 "name": null, 00:39:09.058 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:09.058 "is_configured": false, 00:39:09.058 "data_offset": 256, 00:39:09.058 "data_size": 7936 00:39:09.058 } 00:39:09.058 ] 00:39:09.058 }' 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.058 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.623 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.624 [2024-10-09 14:08:15.874579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:09.624 [2024-10-09 14:08:15.874648] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:09.624 [2024-10-09 14:08:15.874676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:09.624 [2024-10-09 14:08:15.874689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:09.624 [2024-10-09 14:08:15.874862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:09.624 [2024-10-09 14:08:15.874877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:09.624 [2024-10-09 14:08:15.874934] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:09.624 [2024-10-09 14:08:15.874956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:09.624 [2024-10-09 14:08:15.875041] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:39:09.624 [2024-10-09 14:08:15.875052] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:09.624 [2024-10-09 14:08:15.875137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:09.624 [2024-10-09 14:08:15.875194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:39:09.624 [2024-10-09 14:08:15.875210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:39:09.624 [2024-10-09 14:08:15.875271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:09.624 pt2 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:39:09.624 14:08:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:09.624 "name": 
"raid_bdev1", 00:39:09.624 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:09.624 "strip_size_kb": 0, 00:39:09.624 "state": "online", 00:39:09.624 "raid_level": "raid1", 00:39:09.624 "superblock": true, 00:39:09.624 "num_base_bdevs": 2, 00:39:09.624 "num_base_bdevs_discovered": 2, 00:39:09.624 "num_base_bdevs_operational": 2, 00:39:09.624 "base_bdevs_list": [ 00:39:09.624 { 00:39:09.624 "name": "pt1", 00:39:09.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:09.624 "is_configured": true, 00:39:09.624 "data_offset": 256, 00:39:09.624 "data_size": 7936 00:39:09.624 }, 00:39:09.624 { 00:39:09.624 "name": "pt2", 00:39:09.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:09.624 "is_configured": true, 00:39:09.624 "data_offset": 256, 00:39:09.624 "data_size": 7936 00:39:09.624 } 00:39:09.624 ] 00:39:09.624 }' 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:09.624 14:08:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.882 [2024-10-09 14:08:16.346955] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:09.882 "name": "raid_bdev1", 00:39:09.882 "aliases": [ 00:39:09.882 "f7e73284-3fea-4a76-9b15-47a9953a8149" 00:39:09.882 ], 00:39:09.882 "product_name": "Raid Volume", 00:39:09.882 "block_size": 4128, 00:39:09.882 "num_blocks": 7936, 00:39:09.882 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:09.882 "md_size": 32, 00:39:09.882 "md_interleave": true, 00:39:09.882 "dif_type": 0, 00:39:09.882 "assigned_rate_limits": { 00:39:09.882 "rw_ios_per_sec": 0, 00:39:09.882 "rw_mbytes_per_sec": 0, 00:39:09.882 "r_mbytes_per_sec": 0, 00:39:09.882 "w_mbytes_per_sec": 0 00:39:09.882 }, 00:39:09.882 "claimed": false, 00:39:09.882 "zoned": false, 00:39:09.882 "supported_io_types": { 00:39:09.882 "read": true, 00:39:09.882 "write": true, 00:39:09.882 "unmap": false, 00:39:09.882 "flush": false, 00:39:09.882 "reset": true, 00:39:09.882 "nvme_admin": false, 00:39:09.882 "nvme_io": false, 00:39:09.882 "nvme_io_md": false, 00:39:09.882 "write_zeroes": true, 00:39:09.882 "zcopy": false, 00:39:09.882 "get_zone_info": false, 00:39:09.882 "zone_management": false, 00:39:09.882 "zone_append": false, 00:39:09.882 "compare": false, 00:39:09.882 "compare_and_write": false, 00:39:09.882 "abort": false, 00:39:09.882 "seek_hole": false, 00:39:09.882 "seek_data": false, 00:39:09.882 "copy": false, 00:39:09.882 "nvme_iov_md": false 00:39:09.882 }, 
00:39:09.882 "memory_domains": [ 00:39:09.882 { 00:39:09.882 "dma_device_id": "system", 00:39:09.882 "dma_device_type": 1 00:39:09.882 }, 00:39:09.882 { 00:39:09.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:09.882 "dma_device_type": 2 00:39:09.882 }, 00:39:09.882 { 00:39:09.882 "dma_device_id": "system", 00:39:09.882 "dma_device_type": 1 00:39:09.882 }, 00:39:09.882 { 00:39:09.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:39:09.882 "dma_device_type": 2 00:39:09.882 } 00:39:09.882 ], 00:39:09.882 "driver_specific": { 00:39:09.882 "raid": { 00:39:09.882 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:09.882 "strip_size_kb": 0, 00:39:09.882 "state": "online", 00:39:09.882 "raid_level": "raid1", 00:39:09.882 "superblock": true, 00:39:09.882 "num_base_bdevs": 2, 00:39:09.882 "num_base_bdevs_discovered": 2, 00:39:09.882 "num_base_bdevs_operational": 2, 00:39:09.882 "base_bdevs_list": [ 00:39:09.882 { 00:39:09.882 "name": "pt1", 00:39:09.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:39:09.882 "is_configured": true, 00:39:09.882 "data_offset": 256, 00:39:09.882 "data_size": 7936 00:39:09.882 }, 00:39:09.882 { 00:39:09.882 "name": "pt2", 00:39:09.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:09.882 "is_configured": true, 00:39:09.882 "data_offset": 256, 00:39:09.882 "data_size": 7936 00:39:09.882 } 00:39:09.882 ] 00:39:09.882 } 00:39:09.882 } 00:39:09.882 }' 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:39:09.882 pt2' 00:39:09.882 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.140 [2024-10-09 14:08:16.574944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f7e73284-3fea-4a76-9b15-47a9953a8149 '!=' f7e73284-3fea-4a76-9b15-47a9953a8149 ']' 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.140 [2024-10-09 14:08:16.610728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:10.140 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:39:10.141 "name": "raid_bdev1", 00:39:10.141 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:10.141 "strip_size_kb": 0, 00:39:10.141 "state": "online", 00:39:10.141 "raid_level": "raid1", 00:39:10.141 "superblock": true, 00:39:10.141 "num_base_bdevs": 2, 00:39:10.141 "num_base_bdevs_discovered": 1, 00:39:10.141 "num_base_bdevs_operational": 1, 00:39:10.141 "base_bdevs_list": [ 00:39:10.141 { 00:39:10.141 "name": null, 00:39:10.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:10.141 "is_configured": false, 00:39:10.141 "data_offset": 0, 00:39:10.141 "data_size": 7936 00:39:10.141 }, 00:39:10.141 { 00:39:10.141 "name": "pt2", 00:39:10.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.141 "is_configured": true, 00:39:10.141 "data_offset": 256, 00:39:10.141 "data_size": 7936 00:39:10.141 } 00:39:10.141 ] 00:39:10.141 }' 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:10.141 14:08:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.707 [2024-10-09 14:08:17.082799] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:10.707 [2024-10-09 14:08:17.082831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:10.707 [2024-10-09 14:08:17.082910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:10.707 [2024-10-09 14:08:17.082963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:10.707 [2024-10-09 
14:08:17.082974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.707 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.708 [2024-10-09 14:08:17.146804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:39:10.708 [2024-10-09 14:08:17.146856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:10.708 [2024-10-09 14:08:17.146878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:39:10.708 [2024-10-09 14:08:17.146889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:10.708 [2024-10-09 14:08:17.149218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:10.708 [2024-10-09 14:08:17.149247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:39:10.708 [2024-10-09 14:08:17.149304] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:39:10.708 [2024-10-09 14:08:17.149335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:10.708 [2024-10-09 14:08:17.149396] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:39:10.708 [2024-10-09 14:08:17.149406] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:39:10.708 [2024-10-09 14:08:17.149494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:10.708 [2024-10-09 14:08:17.149548] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:39:10.708 [2024-10-09 14:08:17.149573] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:39:10.708 [2024-10-09 14:08:17.149638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:10.708 pt2 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:10.708 "name": "raid_bdev1", 00:39:10.708 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:10.708 "strip_size_kb": 0, 00:39:10.708 "state": "online", 00:39:10.708 "raid_level": "raid1", 00:39:10.708 "superblock": true, 00:39:10.708 "num_base_bdevs": 2, 00:39:10.708 "num_base_bdevs_discovered": 1, 00:39:10.708 "num_base_bdevs_operational": 1, 00:39:10.708 "base_bdevs_list": [ 00:39:10.708 { 00:39:10.708 "name": null, 00:39:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:10.708 "is_configured": false, 00:39:10.708 "data_offset": 256, 00:39:10.708 "data_size": 7936 00:39:10.708 }, 00:39:10.708 { 00:39:10.708 "name": "pt2", 00:39:10.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:10.708 "is_configured": true, 00:39:10.708 "data_offset": 256, 00:39:10.708 "data_size": 7936 00:39:10.708 } 00:39:10.708 ] 00:39:10.708 }' 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:10.708 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.275 [2024-10-09 14:08:17.594947] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:11.275 [2024-10-09 14:08:17.594976] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:11.275 [2024-10-09 14:08:17.595051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:11.275 [2024-10-09 14:08:17.595101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:11.275 [2024-10-09 14:08:17.595116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.275 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.275 [2024-10-09 14:08:17.654929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:39:11.275 [2024-10-09 14:08:17.654992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:11.275 [2024-10-09 14:08:17.655015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:39:11.275 [2024-10-09 14:08:17.655034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:11.276 [2024-10-09 14:08:17.657327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:11.276 [2024-10-09 14:08:17.657368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:39:11.276 [2024-10-09 14:08:17.657422] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:39:11.276 [2024-10-09 14:08:17.657461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:39:11.276 [2024-10-09 14:08:17.657546] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:39:11.276 [2024-10-09 14:08:17.657575] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:11.276 [2024-10-09 14:08:17.657595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:39:11.276 [2024-10-09 14:08:17.657655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:39:11.276 [2024-10-09 14:08:17.657721] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:39:11.276 [2024-10-09 14:08:17.657734] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:11.276 [2024-10-09 14:08:17.657798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:11.276 [2024-10-09 14:08:17.657853] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:39:11.276 [2024-10-09 14:08:17.657862] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:39:11.276 [2024-10-09 14:08:17.657927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:11.276 pt1 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:11.276 "name": "raid_bdev1", 00:39:11.276 "uuid": "f7e73284-3fea-4a76-9b15-47a9953a8149", 00:39:11.276 "strip_size_kb": 0, 00:39:11.276 "state": "online", 00:39:11.276 "raid_level": "raid1", 00:39:11.276 "superblock": true, 00:39:11.276 "num_base_bdevs": 2, 00:39:11.276 "num_base_bdevs_discovered": 1, 00:39:11.276 "num_base_bdevs_operational": 1, 00:39:11.276 "base_bdevs_list": [ 00:39:11.276 { 00:39:11.276 "name": null, 00:39:11.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:11.276 "is_configured": false, 00:39:11.276 "data_offset": 256, 00:39:11.276 "data_size": 7936 00:39:11.276 }, 00:39:11.276 { 00:39:11.276 "name": "pt2", 00:39:11.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:39:11.276 "is_configured": true, 00:39:11.276 "data_offset": 256, 00:39:11.276 "data_size": 7936 00:39:11.276 } 00:39:11.276 ] 00:39:11.276 }' 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:11.276 14:08:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.843 14:08:18 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:11.844 [2024-10-09 14:08:18.155258] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f7e73284-3fea-4a76-9b15-47a9953a8149 '!=' f7e73284-3fea-4a76-9b15-47a9953a8149 ']' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99459 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99459 ']' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99459 00:39:11.844 14:08:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99459 00:39:11.844 killing process with pid 99459 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99459' 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99459 00:39:11.844 [2024-10-09 14:08:18.234333] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:11.844 [2024-10-09 14:08:18.234403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:11.844 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99459 00:39:11.844 [2024-10-09 14:08:18.234449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:11.844 [2024-10-09 14:08:18.234459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:39:11.844 [2024-10-09 14:08:18.259413] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:12.103 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:39:12.103 00:39:12.103 real 0m5.027s 00:39:12.103 user 0m8.240s 00:39:12.103 sys 0m1.131s 00:39:12.103 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:39:12.103 ************************************ 00:39:12.103 END TEST raid_superblock_test_md_interleaved 00:39:12.103 14:08:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:12.103 ************************************ 00:39:12.103 14:08:18 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:39:12.103 14:08:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:39:12.103 14:08:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:12.103 14:08:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:12.103 ************************************ 00:39:12.103 START TEST raid_rebuild_test_sb_md_interleaved 00:39:12.103 ************************************ 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=99775 00:39:12.103 14:08:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99775 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99775 ']' 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:12.103 14:08:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:12.362 [2024-10-09 14:08:18.683112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:12.362 I/O size of 3145728 is greater than zero copy threshold (65536). 00:39:12.362 Zero copy mechanism will not be used. 
00:39:12.362 [2024-10-09 14:08:18.683301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99775 ] 00:39:12.362 [2024-10-09 14:08:18.860361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.362 [2024-10-09 14:08:18.902693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.622 [2024-10-09 14:08:18.945726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:12.622 [2024-10-09 14:08:18.945774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 BaseBdev1_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 [2024-10-09 14:08:19.649656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:13.260 [2024-10-09 14:08:19.649711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:13.260 [2024-10-09 14:08:19.649747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:39:13.260 [2024-10-09 14:08:19.649761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:13.260 [2024-10-09 14:08:19.652060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:13.260 [2024-10-09 14:08:19.652096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:13.260 BaseBdev1 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 BaseBdev2_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:39:13.260 [2024-10-09 14:08:19.679507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:39:13.260 [2024-10-09 14:08:19.679580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:13.260 [2024-10-09 14:08:19.679609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:39:13.260 [2024-10-09 14:08:19.679624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:13.260 [2024-10-09 14:08:19.682198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:13.260 [2024-10-09 14:08:19.682232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:39:13.260 BaseBdev2 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 spare_malloc 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 spare_delay 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 [2024-10-09 14:08:19.720693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:13.260 [2024-10-09 14:08:19.720746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:13.260 [2024-10-09 14:08:19.720771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:39:13.260 [2024-10-09 14:08:19.720783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:13.260 [2024-10-09 14:08:19.723009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:13.260 [2024-10-09 14:08:19.723054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:13.260 spare 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.260 [2024-10-09 14:08:19.728715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:13.260 [2024-10-09 14:08:19.730909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:13.260 [2024-10-09 
14:08:19.731069] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:39:13.260 [2024-10-09 14:08:19.731082] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:13.260 [2024-10-09 14:08:19.731201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:39:13.260 [2024-10-09 14:08:19.731276] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:39:13.260 [2024-10-09 14:08:19.731289] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:39:13.260 [2024-10-09 14:08:19.731357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.260 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.261 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.261 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.261 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:13.261 "name": "raid_bdev1", 00:39:13.261 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:13.261 "strip_size_kb": 0, 00:39:13.261 "state": "online", 00:39:13.261 "raid_level": "raid1", 00:39:13.261 "superblock": true, 00:39:13.261 "num_base_bdevs": 2, 00:39:13.261 "num_base_bdevs_discovered": 2, 00:39:13.261 "num_base_bdevs_operational": 2, 00:39:13.261 "base_bdevs_list": [ 00:39:13.261 { 00:39:13.261 "name": "BaseBdev1", 00:39:13.261 "uuid": "81b4ecfd-4f10-5547-8e26-ffa9bf4752ba", 00:39:13.261 "is_configured": true, 00:39:13.261 "data_offset": 256, 00:39:13.261 "data_size": 7936 00:39:13.261 }, 00:39:13.261 { 00:39:13.261 "name": "BaseBdev2", 00:39:13.261 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:13.261 "is_configured": true, 00:39:13.261 "data_offset": 256, 00:39:13.261 "data_size": 7936 00:39:13.261 } 00:39:13.261 ] 00:39:13.261 }' 00:39:13.261 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:13.261 14:08:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.830 14:08:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.830 [2024-10-09 14:08:20.185145] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:39:13.830 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:39:13.830 14:08:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.831 [2024-10-09 14:08:20.268827] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:13.831 14:08:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:13.831 "name": "raid_bdev1", 00:39:13.831 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:13.831 "strip_size_kb": 0, 00:39:13.831 "state": "online", 00:39:13.831 "raid_level": "raid1", 00:39:13.831 "superblock": true, 00:39:13.831 "num_base_bdevs": 2, 00:39:13.831 "num_base_bdevs_discovered": 1, 00:39:13.831 "num_base_bdevs_operational": 1, 00:39:13.831 "base_bdevs_list": [ 00:39:13.831 { 00:39:13.831 "name": null, 00:39:13.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:13.831 "is_configured": false, 00:39:13.831 "data_offset": 0, 00:39:13.831 "data_size": 7936 00:39:13.831 }, 00:39:13.831 { 00:39:13.831 "name": "BaseBdev2", 00:39:13.831 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:13.831 "is_configured": true, 00:39:13.831 "data_offset": 256, 00:39:13.831 "data_size": 7936 00:39:13.831 } 00:39:13.831 ] 00:39:13.831 }' 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:13.831 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:14.399 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:14.399 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:14.399 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:14.399 [2024-10-09 14:08:20.648947] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:14.399 [2024-10-09 14:08:20.651989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:39:14.399 [2024-10-09 14:08:20.654266] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:14.399 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:14.399 14:08:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.337 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:15.337 "name": "raid_bdev1", 00:39:15.337 
"uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:15.337 "strip_size_kb": 0, 00:39:15.337 "state": "online", 00:39:15.337 "raid_level": "raid1", 00:39:15.337 "superblock": true, 00:39:15.337 "num_base_bdevs": 2, 00:39:15.337 "num_base_bdevs_discovered": 2, 00:39:15.337 "num_base_bdevs_operational": 2, 00:39:15.337 "process": { 00:39:15.337 "type": "rebuild", 00:39:15.337 "target": "spare", 00:39:15.337 "progress": { 00:39:15.337 "blocks": 2560, 00:39:15.337 "percent": 32 00:39:15.337 } 00:39:15.337 }, 00:39:15.337 "base_bdevs_list": [ 00:39:15.338 { 00:39:15.338 "name": "spare", 00:39:15.338 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:15.338 "is_configured": true, 00:39:15.338 "data_offset": 256, 00:39:15.338 "data_size": 7936 00:39:15.338 }, 00:39:15.338 { 00:39:15.338 "name": "BaseBdev2", 00:39:15.338 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:15.338 "is_configured": true, 00:39:15.338 "data_offset": 256, 00:39:15.338 "data_size": 7936 00:39:15.338 } 00:39:15.338 ] 00:39:15.338 }' 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.338 [2024-10-09 14:08:21.803390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:39:15.338 [2024-10-09 14:08:21.862826] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:15.338 [2024-10-09 14:08:21.862884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:15.338 [2024-10-09 14:08:21.862902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:15.338 [2024-10-09 14:08:21.862911] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.338 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.598 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.598 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:15.598 "name": "raid_bdev1", 00:39:15.598 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:15.598 "strip_size_kb": 0, 00:39:15.598 "state": "online", 00:39:15.598 "raid_level": "raid1", 00:39:15.598 "superblock": true, 00:39:15.598 "num_base_bdevs": 2, 00:39:15.598 "num_base_bdevs_discovered": 1, 00:39:15.598 "num_base_bdevs_operational": 1, 00:39:15.598 "base_bdevs_list": [ 00:39:15.598 { 00:39:15.598 "name": null, 00:39:15.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.598 "is_configured": false, 00:39:15.598 "data_offset": 0, 00:39:15.598 "data_size": 7936 00:39:15.598 }, 00:39:15.598 { 00:39:15.598 "name": "BaseBdev2", 00:39:15.598 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:15.598 "is_configured": true, 00:39:15.598 "data_offset": 256, 00:39:15.598 "data_size": 7936 00:39:15.598 } 00:39:15.598 ] 00:39:15.598 }' 00:39:15.598 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:15.598 14:08:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:15.857 "name": "raid_bdev1", 00:39:15.857 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:15.857 "strip_size_kb": 0, 00:39:15.857 "state": "online", 00:39:15.857 "raid_level": "raid1", 00:39:15.857 "superblock": true, 00:39:15.857 "num_base_bdevs": 2, 00:39:15.857 "num_base_bdevs_discovered": 1, 00:39:15.857 "num_base_bdevs_operational": 1, 00:39:15.857 "base_bdevs_list": [ 00:39:15.857 { 00:39:15.857 "name": null, 00:39:15.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:15.857 "is_configured": false, 00:39:15.857 "data_offset": 0, 00:39:15.857 "data_size": 7936 00:39:15.857 }, 00:39:15.857 { 00:39:15.857 "name": "BaseBdev2", 00:39:15.857 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:15.857 "is_configured": true, 00:39:15.857 "data_offset": 256, 00:39:15.857 "data_size": 7936 00:39:15.857 } 00:39:15.857 ] 00:39:15.857 }' 
00:39:15.857 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:16.116 [2024-10-09 14:08:22.467167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:16.116 [2024-10-09 14:08:22.470161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:39:16.116 [2024-10-09 14:08:22.472382] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:16.116 14:08:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:17.055 "name": "raid_bdev1", 00:39:17.055 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:17.055 "strip_size_kb": 0, 00:39:17.055 "state": "online", 00:39:17.055 "raid_level": "raid1", 00:39:17.055 "superblock": true, 00:39:17.055 "num_base_bdevs": 2, 00:39:17.055 "num_base_bdevs_discovered": 2, 00:39:17.055 "num_base_bdevs_operational": 2, 00:39:17.055 "process": { 00:39:17.055 "type": "rebuild", 00:39:17.055 "target": "spare", 00:39:17.055 "progress": { 00:39:17.055 "blocks": 2560, 00:39:17.055 "percent": 32 00:39:17.055 } 00:39:17.055 }, 00:39:17.055 "base_bdevs_list": [ 00:39:17.055 { 00:39:17.055 "name": "spare", 00:39:17.055 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:17.055 "is_configured": true, 00:39:17.055 "data_offset": 256, 00:39:17.055 "data_size": 7936 00:39:17.055 }, 00:39:17.055 { 00:39:17.055 "name": "BaseBdev2", 00:39:17.055 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:17.055 "is_configured": true, 00:39:17.055 "data_offset": 256, 00:39:17.055 "data_size": 7936 00:39:17.055 } 00:39:17.055 ] 00:39:17.055 }' 00:39:17.055 14:08:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:39:17.055 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=637 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:17.055 14:08:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:17.055 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:17.315 "name": "raid_bdev1", 00:39:17.315 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:17.315 "strip_size_kb": 0, 00:39:17.315 "state": "online", 00:39:17.315 "raid_level": "raid1", 00:39:17.315 "superblock": true, 00:39:17.315 "num_base_bdevs": 2, 00:39:17.315 "num_base_bdevs_discovered": 2, 00:39:17.315 "num_base_bdevs_operational": 2, 00:39:17.315 "process": { 00:39:17.315 "type": "rebuild", 00:39:17.315 "target": "spare", 00:39:17.315 "progress": { 00:39:17.315 "blocks": 2816, 00:39:17.315 "percent": 35 00:39:17.315 } 00:39:17.315 }, 00:39:17.315 "base_bdevs_list": [ 00:39:17.315 { 00:39:17.315 "name": "spare", 00:39:17.315 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:17.315 "is_configured": true, 00:39:17.315 "data_offset": 256, 00:39:17.315 "data_size": 7936 00:39:17.315 }, 00:39:17.315 { 00:39:17.315 "name": "BaseBdev2", 00:39:17.315 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:17.315 "is_configured": true, 00:39:17.315 "data_offset": 256, 00:39:17.315 "data_size": 7936 00:39:17.315 } 00:39:17.315 ] 00:39:17.315 }' 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:17.315 14:08:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:18.253 14:08:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:18.253 "name": "raid_bdev1", 00:39:18.253 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:18.253 "strip_size_kb": 0, 00:39:18.253 "state": "online", 00:39:18.253 "raid_level": "raid1", 00:39:18.253 "superblock": true, 00:39:18.253 "num_base_bdevs": 2, 00:39:18.253 "num_base_bdevs_discovered": 2, 00:39:18.253 "num_base_bdevs_operational": 2, 00:39:18.253 "process": { 00:39:18.253 "type": "rebuild", 00:39:18.253 "target": "spare", 00:39:18.253 "progress": { 00:39:18.253 "blocks": 5632, 00:39:18.253 "percent": 70 00:39:18.253 } 00:39:18.253 }, 00:39:18.253 "base_bdevs_list": [ 00:39:18.253 { 00:39:18.253 "name": "spare", 00:39:18.253 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:18.253 "is_configured": true, 00:39:18.253 "data_offset": 256, 00:39:18.253 "data_size": 7936 00:39:18.253 }, 00:39:18.253 { 00:39:18.253 "name": "BaseBdev2", 00:39:18.253 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:18.253 "is_configured": true, 00:39:18.253 "data_offset": 256, 00:39:18.253 "data_size": 7936 00:39:18.253 } 00:39:18.253 ] 00:39:18.253 }' 00:39:18.253 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:18.512 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:18.512 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:18.512 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:18.512 14:08:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:39:19.080 [2024-10-09 14:08:25.590318] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:39:19.080 [2024-10-09 14:08:25.590395] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:39:19.080 [2024-10-09 14:08:25.590489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.340 "name": "raid_bdev1", 00:39:19.340 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:19.340 "strip_size_kb": 0, 00:39:19.340 "state": "online", 00:39:19.340 "raid_level": "raid1", 00:39:19.340 "superblock": true, 00:39:19.340 "num_base_bdevs": 2, 00:39:19.340 
"num_base_bdevs_discovered": 2, 00:39:19.340 "num_base_bdevs_operational": 2, 00:39:19.340 "base_bdevs_list": [ 00:39:19.340 { 00:39:19.340 "name": "spare", 00:39:19.340 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:19.340 "is_configured": true, 00:39:19.340 "data_offset": 256, 00:39:19.340 "data_size": 7936 00:39:19.340 }, 00:39:19.340 { 00:39:19.340 "name": "BaseBdev2", 00:39:19.340 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:19.340 "is_configured": true, 00:39:19.340 "data_offset": 256, 00:39:19.340 "data_size": 7936 00:39:19.340 } 00:39:19.340 ] 00:39:19.340 }' 00:39:19.340 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.601 14:08:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:19.601 "name": "raid_bdev1", 00:39:19.601 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:19.601 "strip_size_kb": 0, 00:39:19.601 "state": "online", 00:39:19.601 "raid_level": "raid1", 00:39:19.601 "superblock": true, 00:39:19.601 "num_base_bdevs": 2, 00:39:19.601 "num_base_bdevs_discovered": 2, 00:39:19.601 "num_base_bdevs_operational": 2, 00:39:19.601 "base_bdevs_list": [ 00:39:19.601 { 00:39:19.601 "name": "spare", 00:39:19.601 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:19.601 "is_configured": true, 00:39:19.601 "data_offset": 256, 00:39:19.601 "data_size": 7936 00:39:19.601 }, 00:39:19.601 { 00:39:19.601 "name": "BaseBdev2", 00:39:19.601 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:19.601 "is_configured": true, 00:39:19.601 "data_offset": 256, 00:39:19.601 "data_size": 7936 00:39:19.601 } 00:39:19.601 ] 00:39:19.601 }' 00:39:19.601 14:08:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:19.601 14:08:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:19.601 "name": 
"raid_bdev1", 00:39:19.601 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:19.601 "strip_size_kb": 0, 00:39:19.601 "state": "online", 00:39:19.601 "raid_level": "raid1", 00:39:19.601 "superblock": true, 00:39:19.601 "num_base_bdevs": 2, 00:39:19.601 "num_base_bdevs_discovered": 2, 00:39:19.601 "num_base_bdevs_operational": 2, 00:39:19.601 "base_bdevs_list": [ 00:39:19.601 { 00:39:19.601 "name": "spare", 00:39:19.601 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:19.601 "is_configured": true, 00:39:19.601 "data_offset": 256, 00:39:19.601 "data_size": 7936 00:39:19.601 }, 00:39:19.601 { 00:39:19.601 "name": "BaseBdev2", 00:39:19.601 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:19.601 "is_configured": true, 00:39:19.601 "data_offset": 256, 00:39:19.601 "data_size": 7936 00:39:19.601 } 00:39:19.601 ] 00:39:19.601 }' 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:19.601 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 [2024-10-09 14:08:26.502437] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:39:20.177 [2024-10-09 14:08:26.502614] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:39:20.177 [2024-10-09 14:08:26.502837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:20.177 [2024-10-09 14:08:26.503015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:20.177 [2024-10-09 
14:08:26.503135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 [2024-10-09 14:08:26.566403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:20.177 [2024-10-09 14:08:26.566469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:20.177 [2024-10-09 14:08:26.566492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:39:20.177 [2024-10-09 14:08:26.566506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:20.177 [2024-10-09 14:08:26.568944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:20.177 [2024-10-09 14:08:26.568987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:20.177 [2024-10-09 14:08:26.569050] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:20.177 [2024-10-09 14:08:26.569108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:20.177 [2024-10-09 14:08:26.569207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:39:20.177 spare 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 [2024-10-09 14:08:26.669306] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:39:20.177 [2024-10-09 14:08:26.669348] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:39:20.177 [2024-10-09 14:08:26.669489] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:39:20.177 [2024-10-09 14:08:26.669633] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:39:20.177 [2024-10-09 14:08:26.669648] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:39:20.177 [2024-10-09 14:08:26.669745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.177 
14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:20.177 "name": "raid_bdev1", 00:39:20.177 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:20.177 "strip_size_kb": 0, 00:39:20.177 "state": "online", 00:39:20.177 "raid_level": "raid1", 00:39:20.177 "superblock": true, 00:39:20.177 "num_base_bdevs": 2, 00:39:20.177 "num_base_bdevs_discovered": 2, 00:39:20.177 "num_base_bdevs_operational": 2, 00:39:20.177 "base_bdevs_list": [ 00:39:20.177 { 00:39:20.177 "name": "spare", 00:39:20.177 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:20.177 "is_configured": true, 00:39:20.177 "data_offset": 256, 00:39:20.177 "data_size": 7936 00:39:20.177 }, 00:39:20.177 { 00:39:20.177 "name": "BaseBdev2", 00:39:20.177 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:20.177 "is_configured": true, 00:39:20.177 "data_offset": 256, 00:39:20.177 "data_size": 7936 00:39:20.177 } 00:39:20.177 ] 00:39:20.177 }' 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:20.177 14:08:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:20.745 14:08:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.745 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:20.745 "name": "raid_bdev1", 00:39:20.745 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:20.745 "strip_size_kb": 0, 00:39:20.745 "state": "online", 00:39:20.745 "raid_level": "raid1", 00:39:20.745 "superblock": true, 00:39:20.745 "num_base_bdevs": 2, 00:39:20.745 "num_base_bdevs_discovered": 2, 00:39:20.745 "num_base_bdevs_operational": 2, 00:39:20.746 "base_bdevs_list": [ 00:39:20.746 { 00:39:20.746 "name": "spare", 00:39:20.746 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:20.746 "is_configured": true, 00:39:20.746 "data_offset": 256, 00:39:20.746 "data_size": 7936 00:39:20.746 }, 00:39:20.746 { 00:39:20.746 "name": "BaseBdev2", 00:39:20.746 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:20.746 "is_configured": true, 00:39:20.746 "data_offset": 256, 00:39:20.746 "data_size": 7936 00:39:20.746 } 00:39:20.746 ] 00:39:20.746 }' 00:39:20.746 14:08:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.746 [2024-10-09 14:08:27.254632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:20.746 14:08:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:20.746 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.005 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:21.005 "name": "raid_bdev1", 00:39:21.005 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:21.005 "strip_size_kb": 0, 00:39:21.005 "state": "online", 00:39:21.005 
"raid_level": "raid1", 00:39:21.005 "superblock": true, 00:39:21.005 "num_base_bdevs": 2, 00:39:21.005 "num_base_bdevs_discovered": 1, 00:39:21.005 "num_base_bdevs_operational": 1, 00:39:21.005 "base_bdevs_list": [ 00:39:21.005 { 00:39:21.005 "name": null, 00:39:21.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:21.005 "is_configured": false, 00:39:21.005 "data_offset": 0, 00:39:21.005 "data_size": 7936 00:39:21.005 }, 00:39:21.005 { 00:39:21.005 "name": "BaseBdev2", 00:39:21.005 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:21.005 "is_configured": true, 00:39:21.005 "data_offset": 256, 00:39:21.005 "data_size": 7936 00:39:21.005 } 00:39:21.005 ] 00:39:21.005 }' 00:39:21.005 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:21.005 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:21.265 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:39:21.265 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:21.265 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:21.265 [2024-10-09 14:08:27.654795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:21.265 [2024-10-09 14:08:27.655156] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:21.265 [2024-10-09 14:08:27.655327] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:21.265 [2024-10-09 14:08:27.655457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:21.265 [2024-10-09 14:08:27.658478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:39:21.265 [2024-10-09 14:08:27.660920] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:21.265 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:21.265 14:08:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:39:22.203 "name": "raid_bdev1", 00:39:22.203 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:22.203 "strip_size_kb": 0, 00:39:22.203 "state": "online", 00:39:22.203 "raid_level": "raid1", 00:39:22.203 "superblock": true, 00:39:22.203 "num_base_bdevs": 2, 00:39:22.203 "num_base_bdevs_discovered": 2, 00:39:22.203 "num_base_bdevs_operational": 2, 00:39:22.203 "process": { 00:39:22.203 "type": "rebuild", 00:39:22.203 "target": "spare", 00:39:22.203 "progress": { 00:39:22.203 "blocks": 2560, 00:39:22.203 "percent": 32 00:39:22.203 } 00:39:22.203 }, 00:39:22.203 "base_bdevs_list": [ 00:39:22.203 { 00:39:22.203 "name": "spare", 00:39:22.203 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:22.203 "is_configured": true, 00:39:22.203 "data_offset": 256, 00:39:22.203 "data_size": 7936 00:39:22.203 }, 00:39:22.203 { 00:39:22.203 "name": "BaseBdev2", 00:39:22.203 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:22.203 "is_configured": true, 00:39:22.203 "data_offset": 256, 00:39:22.203 "data_size": 7936 00:39:22.203 } 00:39:22.203 ] 00:39:22.203 }' 00:39:22.203 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:22.463 [2024-10-09 14:08:28.797939] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:22.463 [2024-10-09 14:08:28.867956] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:22.463 [2024-10-09 14:08:28.868012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:22.463 [2024-10-09 14:08:28.868047] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:22.463 [2024-10-09 14:08:28.868056] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:22.463 14:08:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:22.463 "name": "raid_bdev1", 00:39:22.463 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:22.463 "strip_size_kb": 0, 00:39:22.463 "state": "online", 00:39:22.463 "raid_level": "raid1", 00:39:22.463 "superblock": true, 00:39:22.463 "num_base_bdevs": 2, 00:39:22.463 "num_base_bdevs_discovered": 1, 00:39:22.463 "num_base_bdevs_operational": 1, 00:39:22.463 "base_bdevs_list": [ 00:39:22.463 { 00:39:22.463 "name": null, 00:39:22.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:22.463 "is_configured": false, 00:39:22.463 "data_offset": 0, 00:39:22.463 "data_size": 7936 00:39:22.463 }, 00:39:22.463 { 00:39:22.463 "name": "BaseBdev2", 00:39:22.463 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:22.463 "is_configured": true, 00:39:22.463 "data_offset": 256, 00:39:22.463 "data_size": 7936 00:39:22.463 } 00:39:22.463 ] 00:39:22.463 }' 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:22.463 14:08:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:23.031 14:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:39:23.031 14:08:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.031 14:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:23.031 [2024-10-09 14:08:29.323952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:39:23.031 [2024-10-09 14:08:29.324129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:23.031 [2024-10-09 14:08:29.324169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:39:23.031 [2024-10-09 14:08:29.324181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:23.031 [2024-10-09 14:08:29.324381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:23.031 [2024-10-09 14:08:29.324396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:39:23.031 [2024-10-09 14:08:29.324456] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:39:23.031 [2024-10-09 14:08:29.324469] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:39:23.031 [2024-10-09 14:08:29.324483] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:39:23.031 [2024-10-09 14:08:29.324507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:39:23.031 [2024-10-09 14:08:29.327345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:39:23.031 [2024-10-09 14:08:29.329696] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:39:23.031 spare 00:39:23.031 14:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.031 14:08:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:39:23.969 "name": "raid_bdev1", 00:39:23.969 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:23.969 "strip_size_kb": 0, 00:39:23.969 "state": "online", 00:39:23.969 "raid_level": "raid1", 00:39:23.969 "superblock": true, 00:39:23.969 "num_base_bdevs": 2, 00:39:23.969 "num_base_bdevs_discovered": 2, 00:39:23.969 "num_base_bdevs_operational": 2, 00:39:23.969 "process": { 00:39:23.969 "type": "rebuild", 00:39:23.969 "target": "spare", 00:39:23.969 "progress": { 00:39:23.969 "blocks": 2560, 00:39:23.969 "percent": 32 00:39:23.969 } 00:39:23.969 }, 00:39:23.969 "base_bdevs_list": [ 00:39:23.969 { 00:39:23.969 "name": "spare", 00:39:23.969 "uuid": "afc069a8-89ef-53c5-9d54-0b7ea7f26f6f", 00:39:23.969 "is_configured": true, 00:39:23.969 "data_offset": 256, 00:39:23.969 "data_size": 7936 00:39:23.969 }, 00:39:23.969 { 00:39:23.969 "name": "BaseBdev2", 00:39:23.969 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:23.969 "is_configured": true, 00:39:23.969 "data_offset": 256, 00:39:23.969 "data_size": 7936 00:39:23.969 } 00:39:23.969 ] 00:39:23.969 }' 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:23.969 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:23.969 [2024-10-09 
14:08:30.482919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:24.228 [2024-10-09 14:08:30.536447] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:24.228 [2024-10-09 14:08:30.536528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:24.228 [2024-10-09 14:08:30.536544] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:24.228 [2024-10-09 14:08:30.536555] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:24.228 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.228 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:24.228 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:24.229 14:08:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:24.229 "name": "raid_bdev1", 00:39:24.229 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:24.229 "strip_size_kb": 0, 00:39:24.229 "state": "online", 00:39:24.229 "raid_level": "raid1", 00:39:24.229 "superblock": true, 00:39:24.229 "num_base_bdevs": 2, 00:39:24.229 "num_base_bdevs_discovered": 1, 00:39:24.229 "num_base_bdevs_operational": 1, 00:39:24.229 "base_bdevs_list": [ 00:39:24.229 { 00:39:24.229 "name": null, 00:39:24.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.229 "is_configured": false, 00:39:24.229 "data_offset": 0, 00:39:24.229 "data_size": 7936 00:39:24.229 }, 00:39:24.229 { 00:39:24.229 "name": "BaseBdev2", 00:39:24.229 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:24.229 "is_configured": true, 00:39:24.229 "data_offset": 256, 00:39:24.229 "data_size": 7936 00:39:24.229 } 00:39:24.229 ] 00:39:24.229 }' 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:24.229 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.487 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:24.487 14:08:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:24.487 14:08:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.487 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:24.747 "name": "raid_bdev1", 00:39:24.747 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:24.747 "strip_size_kb": 0, 00:39:24.747 "state": "online", 00:39:24.747 "raid_level": "raid1", 00:39:24.747 "superblock": true, 00:39:24.747 "num_base_bdevs": 2, 00:39:24.747 "num_base_bdevs_discovered": 1, 00:39:24.747 "num_base_bdevs_operational": 1, 00:39:24.747 "base_bdevs_list": [ 00:39:24.747 { 00:39:24.747 "name": null, 00:39:24.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:24.747 "is_configured": false, 00:39:24.747 "data_offset": 0, 00:39:24.747 "data_size": 7936 00:39:24.747 }, 00:39:24.747 { 00:39:24.747 "name": "BaseBdev2", 00:39:24.747 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:24.747 "is_configured": true, 00:39:24.747 "data_offset": 256, 
00:39:24.747 "data_size": 7936 00:39:24.747 } 00:39:24.747 ] 00:39:24.747 }' 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:24.747 [2024-10-09 14:08:31.148410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:24.747 [2024-10-09 14:08:31.148475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:24.747 [2024-10-09 14:08:31.148497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:39:24.747 [2024-10-09 14:08:31.148511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:24.747 [2024-10-09 14:08:31.148682] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:24.747 [2024-10-09 14:08:31.148700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:24.747 [2024-10-09 14:08:31.148750] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:24.747 [2024-10-09 14:08:31.148777] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:24.747 [2024-10-09 14:08:31.148793] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:24.747 [2024-10-09 14:08:31.148825] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:39:24.747 BaseBdev1 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:24.747 14:08:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:25.686 14:08:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:25.686 "name": "raid_bdev1", 00:39:25.686 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:25.686 "strip_size_kb": 0, 00:39:25.686 "state": "online", 00:39:25.686 "raid_level": "raid1", 00:39:25.686 "superblock": true, 00:39:25.686 "num_base_bdevs": 2, 00:39:25.686 "num_base_bdevs_discovered": 1, 00:39:25.686 "num_base_bdevs_operational": 1, 00:39:25.686 "base_bdevs_list": [ 00:39:25.686 { 00:39:25.686 "name": null, 00:39:25.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:25.686 "is_configured": false, 00:39:25.686 "data_offset": 0, 00:39:25.686 "data_size": 7936 00:39:25.686 }, 00:39:25.686 { 00:39:25.686 "name": "BaseBdev2", 00:39:25.686 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:25.686 "is_configured": true, 00:39:25.686 "data_offset": 256, 00:39:25.686 "data_size": 7936 00:39:25.686 } 00:39:25.686 ] 00:39:25.686 }' 00:39:25.686 14:08:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:25.686 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:26.253 "name": "raid_bdev1", 00:39:26.253 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:26.253 "strip_size_kb": 0, 00:39:26.253 "state": "online", 00:39:26.253 "raid_level": "raid1", 00:39:26.253 "superblock": true, 00:39:26.253 "num_base_bdevs": 2, 00:39:26.253 "num_base_bdevs_discovered": 1, 00:39:26.253 "num_base_bdevs_operational": 1, 00:39:26.253 "base_bdevs_list": [ 00:39:26.253 { 00:39:26.253 "name": 
null, 00:39:26.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:26.253 "is_configured": false, 00:39:26.253 "data_offset": 0, 00:39:26.253 "data_size": 7936 00:39:26.253 }, 00:39:26.253 { 00:39:26.253 "name": "BaseBdev2", 00:39:26.253 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:26.253 "is_configured": true, 00:39:26.253 "data_offset": 256, 00:39:26.253 "data_size": 7936 00:39:26.253 } 00:39:26.253 ] 00:39:26.253 }' 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:26.253 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:26.254 [2024-10-09 14:08:32.704845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:26.254 [2024-10-09 14:08:32.705035] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:26.254 [2024-10-09 14:08:32.705051] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:26.254 request: 00:39:26.254 { 00:39:26.254 "base_bdev": "BaseBdev1", 00:39:26.254 "raid_bdev": "raid_bdev1", 00:39:26.254 "method": "bdev_raid_add_base_bdev", 00:39:26.254 "req_id": 1 00:39:26.254 } 00:39:26.254 Got JSON-RPC error response 00:39:26.254 response: 00:39:26.254 { 00:39:26.254 "code": -22, 00:39:26.254 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:26.254 } 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:26.254 14:08:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.189 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:27.447 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.447 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:39:27.447 "name": "raid_bdev1", 00:39:27.447 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:27.447 "strip_size_kb": 0, 
00:39:27.447 "state": "online", 00:39:27.447 "raid_level": "raid1", 00:39:27.447 "superblock": true, 00:39:27.447 "num_base_bdevs": 2, 00:39:27.447 "num_base_bdevs_discovered": 1, 00:39:27.447 "num_base_bdevs_operational": 1, 00:39:27.447 "base_bdevs_list": [ 00:39:27.447 { 00:39:27.447 "name": null, 00:39:27.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.447 "is_configured": false, 00:39:27.447 "data_offset": 0, 00:39:27.447 "data_size": 7936 00:39:27.447 }, 00:39:27.447 { 00:39:27.447 "name": "BaseBdev2", 00:39:27.447 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:27.447 "is_configured": true, 00:39:27.447 "data_offset": 256, 00:39:27.447 "data_size": 7936 00:39:27.447 } 00:39:27.447 ] 00:39:27.447 }' 00:39:27.447 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:39:27.447 14:08:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:27.705 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:27.705 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:39:27.705 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:39:27.705 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:39:27.705 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:27.706 
14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:39:27.706 "name": "raid_bdev1", 00:39:27.706 "uuid": "a4bda7bf-1129-4042-8b4b-ba103c4f86d6", 00:39:27.706 "strip_size_kb": 0, 00:39:27.706 "state": "online", 00:39:27.706 "raid_level": "raid1", 00:39:27.706 "superblock": true, 00:39:27.706 "num_base_bdevs": 2, 00:39:27.706 "num_base_bdevs_discovered": 1, 00:39:27.706 "num_base_bdevs_operational": 1, 00:39:27.706 "base_bdevs_list": [ 00:39:27.706 { 00:39:27.706 "name": null, 00:39:27.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:27.706 "is_configured": false, 00:39:27.706 "data_offset": 0, 00:39:27.706 "data_size": 7936 00:39:27.706 }, 00:39:27.706 { 00:39:27.706 "name": "BaseBdev2", 00:39:27.706 "uuid": "9a858199-2b28-5739-b823-244ea2b62029", 00:39:27.706 "is_configured": true, 00:39:27.706 "data_offset": 256, 00:39:27.706 "data_size": 7936 00:39:27.706 } 00:39:27.706 ] 00:39:27.706 }' 00:39:27.706 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99775 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99775 ']' 00:39:27.964 14:08:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99775 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99775 00:39:27.964 killing process with pid 99775 00:39:27.964 Received shutdown signal, test time was about 60.000000 seconds 00:39:27.964 00:39:27.964 Latency(us) 00:39:27.964 [2024-10-09T14:08:34.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.964 [2024-10-09T14:08:34.515Z] =================================================================================================================== 00:39:27.964 [2024-10-09T14:08:34.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99775' 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99775 00:39:27.964 [2024-10-09 14:08:34.344591] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:27.964 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99775 00:39:27.964 [2024-10-09 14:08:34.344721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:27.964 [2024-10-09 14:08:34.344774] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:39:27.964 [2024-10-09 14:08:34.344785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:39:27.964 [2024-10-09 14:08:34.378299] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:28.223 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:39:28.223 00:39:28.223 real 0m16.053s 00:39:28.223 user 0m21.406s 00:39:28.223 sys 0m1.685s 00:39:28.223 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:28.223 ************************************ 00:39:28.223 END TEST raid_rebuild_test_sb_md_interleaved 00:39:28.223 ************************************ 00:39:28.223 14:08:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:28.223 14:08:34 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:39:28.223 14:08:34 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:39:28.223 14:08:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99775 ']' 00:39:28.223 14:08:34 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99775 00:39:28.223 14:08:34 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:39:28.223 00:39:28.223 real 10m18.458s 00:39:28.223 user 14m45.296s 00:39:28.223 sys 1m59.113s 00:39:28.223 14:08:34 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:28.223 14:08:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:28.223 ************************************ 00:39:28.223 END TEST bdev_raid 00:39:28.223 ************************************ 00:39:28.223 14:08:34 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:39:28.223 14:08:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:39:28.223 14:08:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:28.223 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:39:28.223 
************************************ 00:39:28.223 START TEST spdkcli_raid 00:39:28.223 ************************************ 00:39:28.223 14:08:34 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:39:28.482 * Looking for test storage... 00:39:28.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.482 14:08:34 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:28.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.482 --rc genhtml_branch_coverage=1 00:39:28.482 --rc genhtml_function_coverage=1 00:39:28.482 --rc genhtml_legend=1 00:39:28.482 --rc geninfo_all_blocks=1 00:39:28.482 --rc geninfo_unexecuted_blocks=1 00:39:28.482 00:39:28.482 ' 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:28.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.482 --rc genhtml_branch_coverage=1 00:39:28.482 --rc genhtml_function_coverage=1 00:39:28.482 --rc genhtml_legend=1 00:39:28.482 --rc geninfo_all_blocks=1 00:39:28.482 --rc geninfo_unexecuted_blocks=1 00:39:28.482 00:39:28.482 ' 00:39:28.482 
14:08:34 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:28.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.482 --rc genhtml_branch_coverage=1 00:39:28.482 --rc genhtml_function_coverage=1 00:39:28.482 --rc genhtml_legend=1 00:39:28.482 --rc geninfo_all_blocks=1 00:39:28.482 --rc geninfo_unexecuted_blocks=1 00:39:28.482 00:39:28.482 ' 00:39:28.482 14:08:34 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:28.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.482 --rc genhtml_branch_coverage=1 00:39:28.482 --rc genhtml_function_coverage=1 00:39:28.482 --rc genhtml_legend=1 00:39:28.482 --rc geninfo_all_blocks=1 00:39:28.482 --rc geninfo_unexecuted_blocks=1 00:39:28.482 00:39:28.482 ' 00:39:28.482 14:08:34 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:39:28.482 14:08:34 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:39:28.482 14:08:34 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:39:28.482 14:08:34 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:39:28.483 14:08:34 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:39:28.483 14:08:34 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:39:28.483 14:08:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:28.483 14:08:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:28.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:28.483 14:08:35 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:39:28.483 14:08:35 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100437 00:39:28.483 14:08:35 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100437 00:39:28.483 14:08:35 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100437 ']' 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:28.483 14:08:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:28.744 [2024-10-09 14:08:35.143149] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:28.744 [2024-10-09 14:08:35.143626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100437 ] 00:39:29.003 [2024-10-09 14:08:35.326004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:29.003 [2024-10-09 14:08:35.375234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.003 [2024-10-09 14:08:35.375331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.567 14:08:36 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:29.567 14:08:36 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:39:29.567 14:08:36 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:39:29.567 14:08:36 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:29.567 14:08:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:29.824 14:08:36 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:39:29.825 14:08:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:29.825 14:08:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:29.825 14:08:36 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:39:29.825 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:39:29.825 ' 00:39:31.255 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:39:31.255 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:39:31.255 14:08:37 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:39:31.255 14:08:37 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:31.255 14:08:37 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:39:31.513 14:08:37 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:39:31.513 14:08:37 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:31.513 14:08:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:31.513 14:08:37 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:39:31.513 ' 00:39:32.450 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:39:32.709 14:08:39 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:39:32.709 14:08:39 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:32.709 14:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:32.709 14:08:39 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:39:32.709 14:08:39 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:32.709 14:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:32.709 14:08:39 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:39:32.709 14:08:39 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:39:33.277 14:08:39 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:39:33.277 14:08:39 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:39:33.277 14:08:39 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:39:33.277 14:08:39 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.277 14:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:33.277 14:08:39 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:39:33.277 14:08:39 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:33.277 14:08:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:33.277 14:08:39 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:39:33.277 ' 00:39:34.212 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:39:34.471 14:08:40 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:39:34.471 14:08:40 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:34.471 14:08:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:34.471 14:08:40 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:39:34.471 14:08:40 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:34.471 14:08:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:34.471 14:08:40 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:39:34.471 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:39:34.471 ' 00:39:35.849 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:39:35.849 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:39:35.849 14:08:42 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:39:35.849 14:08:42 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:35.849 14:08:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:36.108 14:08:42 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100437 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100437 ']' 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100437 00:39:36.108 14:08:42 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100437 00:39:36.108 killing process with pid 100437 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100437' 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100437 00:39:36.108 14:08:42 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100437 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100437 ']' 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100437 00:39:36.367 14:08:42 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100437 ']' 00:39:36.367 Process with pid 100437 is not found 00:39:36.367 14:08:42 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100437 00:39:36.367 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100437) - No such process 00:39:36.367 14:08:42 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100437 is not found' 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:39:36.367 14:08:42 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:39:36.367 ************************************ 00:39:36.367 END TEST 
spdkcli_raid 00:39:36.367 ************************************ 00:39:36.367 00:39:36.367 real 0m8.105s 00:39:36.367 user 0m17.211s 00:39:36.367 sys 0m1.121s 00:39:36.367 14:08:42 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:36.367 14:08:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:39:36.626 14:08:42 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:39:36.626 14:08:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:36.626 14:08:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:36.626 14:08:42 -- common/autotest_common.sh@10 -- # set +x 00:39:36.626 ************************************ 00:39:36.627 START TEST blockdev_raid5f 00:39:36.627 ************************************ 00:39:36.627 14:08:42 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:39:36.627 * Looking for test storage... 00:39:36.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@337 -- 
# read -ra ver2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.627 14:08:43 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 
00:39:36.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.627 --rc genhtml_branch_coverage=1 00:39:36.627 --rc genhtml_function_coverage=1 00:39:36.627 --rc genhtml_legend=1 00:39:36.627 --rc geninfo_all_blocks=1 00:39:36.627 --rc geninfo_unexecuted_blocks=1 00:39:36.627 00:39:36.627 ' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:36.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.627 --rc genhtml_branch_coverage=1 00:39:36.627 --rc genhtml_function_coverage=1 00:39:36.627 --rc genhtml_legend=1 00:39:36.627 --rc geninfo_all_blocks=1 00:39:36.627 --rc geninfo_unexecuted_blocks=1 00:39:36.627 00:39:36.627 ' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:36.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.627 --rc genhtml_branch_coverage=1 00:39:36.627 --rc genhtml_function_coverage=1 00:39:36.627 --rc genhtml_legend=1 00:39:36.627 --rc geninfo_all_blocks=1 00:39:36.627 --rc geninfo_unexecuted_blocks=1 00:39:36.627 00:39:36.627 ' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:36.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.627 --rc genhtml_branch_coverage=1 00:39:36.627 --rc genhtml_function_coverage=1 00:39:36.627 --rc genhtml_legend=1 00:39:36.627 --rc geninfo_all_blocks=1 00:39:36.627 --rc geninfo_unexecuted_blocks=1 00:39:36.627 00:39:36.627 ' 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100695 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
100695 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100695 ']' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.627 14:08:43 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:36.627 14:08:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:36.886 [2024-10-09 14:08:43.284038] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:36.886 [2024-10-09 14:08:43.285474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100695 ] 00:39:37.144 [2024-10-09 14:08:43.464808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.144 [2024-10-09 14:08:43.507969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:39:37.712 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:39:37.712 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:39:37.712 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:39:37.712 14:08:44 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.712 Malloc0 00:39:37.712 Malloc1 00:39:37.712 Malloc2 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.712 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.712 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.712 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "eda54f45-be58-471b-b808-45e163ba0a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eda54f45-be58-471b-b808-45e163ba0a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "eda54f45-be58-471b-b808-45e163ba0a8b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2df0dfb8-30c3-4172-8746-a9de5b0879b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"c0c59cbe-609c-4bb6-9181-d1f30fdd969f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d1549773-236c-4e90-a85c-0fbb09f1b813",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:39:37.972 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100695 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100695 ']' 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100695 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100695 00:39:37.972 killing process with pid 100695 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100695' 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100695 00:39:37.972 14:08:44 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100695 00:39:38.539 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:38.539 14:08:44 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:39:38.539 
14:08:44 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:39:38.539 14:08:44 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:38.539 14:08:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:38.539 ************************************ 00:39:38.539 START TEST bdev_hello_world 00:39:38.539 ************************************ 00:39:38.539 14:08:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:39:38.539 [2024-10-09 14:08:45.044728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:38.539 [2024-10-09 14:08:45.044934] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100733 ] 00:39:38.798 [2024-10-09 14:08:45.225368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.798 [2024-10-09 14:08:45.270332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.075 [2024-10-09 14:08:45.458387] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:39.075 [2024-10-09 14:08:45.458442] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:39:39.075 [2024-10-09 14:08:45.458461] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:39.075 [2024-10-09 14:08:45.458829] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:39.075 [2024-10-09 14:08:45.458980] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:39.075 [2024-10-09 14:08:45.459006] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:39.075 [2024-10-09 14:08:45.459063] hello_bdev.c: 65:read_complete: *NOTICE*: Read 
string from bdev : Hello World! 00:39:39.075 00:39:39.075 [2024-10-09 14:08:45.459083] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:39.352 00:39:39.352 real 0m0.799s 00:39:39.352 user 0m0.418s 00:39:39.352 sys 0m0.265s 00:39:39.352 14:08:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:39.352 14:08:45 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:39.352 ************************************ 00:39:39.352 END TEST bdev_hello_world 00:39:39.352 ************************************ 00:39:39.352 14:08:45 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:39:39.352 14:08:45 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:39.352 14:08:45 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:39.352 14:08:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:39.352 ************************************ 00:39:39.352 START TEST bdev_bounds 00:39:39.352 ************************************ 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100762 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100762' 00:39:39.352 Process bdevio pid: 100762 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100762 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100762 ']' 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 
00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:39.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:39.352 14:08:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:39.352 [2024-10-09 14:08:45.872988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:39:39.352 [2024-10-09 14:08:45.873139] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100762 ] 00:39:39.610 [2024-10-09 14:08:46.037897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:39.611 [2024-10-09 14:08:46.086454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.611 [2024-10-09 14:08:46.086457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.611 [2024-10-09 14:08:46.086504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:39:40.546 14:08:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:40.546 14:08:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:39:40.546 14:08:46 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:40.546 I/O targets: 00:39:40.546 raid5f: 131072 blocks of 512 bytes (64 MiB) 
00:39:40.546 00:39:40.546 00:39:40.546 CUnit - A unit testing framework for C - Version 2.1-3 00:39:40.546 http://cunit.sourceforge.net/ 00:39:40.546 00:39:40.546 00:39:40.546 Suite: bdevio tests on: raid5f 00:39:40.546 Test: blockdev write read block ...passed 00:39:40.546 Test: blockdev write zeroes read block ...passed 00:39:40.546 Test: blockdev write zeroes read no split ...passed 00:39:40.546 Test: blockdev write zeroes read split ...passed 00:39:40.805 Test: blockdev write zeroes read split partial ...passed 00:39:40.805 Test: blockdev reset ...passed 00:39:40.805 Test: blockdev write read 8 blocks ...passed 00:39:40.805 Test: blockdev write read size > 128k ...passed 00:39:40.805 Test: blockdev write read invalid size ...passed 00:39:40.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:40.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:40.805 Test: blockdev write read max offset ...passed 00:39:40.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:40.805 Test: blockdev writev readv 8 blocks ...passed 00:39:40.805 Test: blockdev writev readv 30 x 1block ...passed 00:39:40.805 Test: blockdev writev readv block ...passed 00:39:40.805 Test: blockdev writev readv size > 128k ...passed 00:39:40.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:40.805 Test: blockdev comparev and writev ...passed 00:39:40.805 Test: blockdev nvme passthru rw ...passed 00:39:40.805 Test: blockdev nvme passthru vendor specific ...passed 00:39:40.805 Test: blockdev nvme admin passthru ...passed 00:39:40.805 Test: blockdev copy ...passed 00:39:40.805 00:39:40.805 Run Summary: Type Total Ran Passed Failed Inactive 00:39:40.805 suites 1 1 n/a 0 0 00:39:40.805 tests 23 23 23 0 0 00:39:40.805 asserts 130 130 130 0 n/a 00:39:40.805 00:39:40.805 Elapsed time = 0.334 seconds 00:39:40.805 0 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 100762 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100762 ']' 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100762 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100762 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:40.805 killing process with pid 100762 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100762' 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100762 00:39:40.805 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100762 00:39:41.064 14:08:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:39:41.064 00:39:41.064 real 0m1.704s 00:39:41.064 user 0m4.334s 00:39:41.064 sys 0m0.377s 00:39:41.064 ************************************ 00:39:41.064 END TEST bdev_bounds 00:39:41.064 ************************************ 00:39:41.064 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:41.064 14:08:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:41.064 14:08:47 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:39:41.064 14:08:47 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:39:41.064 14:08:47 blockdev_raid5f -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:39:41.064 14:08:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:41.064 ************************************ 00:39:41.064 START TEST bdev_nbd 00:39:41.064 ************************************ 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:39:41.064 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 
00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100816 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100816 /var/tmp/spdk-nbd.sock 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100816 ']' 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:41.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:41.065 14:08:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:41.324 [2024-10-09 14:08:47.636993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:41.324 [2024-10-09 14:08:47.637135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.324 [2024-10-09 14:08:47.797939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:41.324 [2024-10-09 14:08:47.844085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:42.260 14:08:48 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:42.519 1+0 records in 00:39:42.519 1+0 records out 00:39:42.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349338 s, 11.7 MB/s 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:42.519 14:08:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:42.778 { 00:39:42.778 "nbd_device": "/dev/nbd0", 00:39:42.778 "bdev_name": "raid5f" 00:39:42.778 } 00:39:42.778 ]' 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:42.778 { 00:39:42.778 "nbd_device": "/dev/nbd0", 00:39:42.778 "bdev_name": "raid5f" 00:39:42.778 } 00:39:42.778 ]' 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:42.778 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.035 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:43.293 14:08:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:39:43.552 /dev/nbd0 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:43.809 14:08:50 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:43.809 1+0 records in 00:39:43.809 1+0 records out 00:39:43.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473028 s, 8.7 MB/s 00:39:43.809 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:43.810 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:44.068 { 00:39:44.068 "nbd_device": "/dev/nbd0", 00:39:44.068 "bdev_name": "raid5f" 00:39:44.068 } 00:39:44.068 ]' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:44.068 { 00:39:44.068 "nbd_device": "/dev/nbd0", 00:39:44.068 "bdev_name": "raid5f" 00:39:44.068 } 00:39:44.068 ]' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:44.068 256+0 records in 00:39:44.068 256+0 records out 00:39:44.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00750381 s, 140 MB/s 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:44.068 256+0 records in 00:39:44.068 256+0 records out 00:39:44.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336948 s, 31.1 MB/s 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:44.068 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.326 14:08:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:39:44.585 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:44.585 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:44.585 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:39:44.843 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:45.101 malloc_lvol_verify 00:39:45.101 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:45.360 355efc80-cb38-4eb8-a474-286396d69412 00:39:45.360 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:45.360 1a996611-01c7-431e-a79f-dbf838dd96f1 00:39:45.360 14:08:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:45.618 /dev/nbd0 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:39:45.618 mke2fs 1.47.0 (5-Feb-2023) 00:39:45.618 Discarding device blocks: 0/4096 done 00:39:45.618 Creating filesystem with 4096 1k blocks and 1024 inodes 00:39:45.618 00:39:45.618 Allocating group tables: 0/1 done 00:39:45.618 Writing inode tables: 0/1 done 00:39:45.618 Creating journal (1024 blocks): done 00:39:45.618 Writing superblocks and filesystem accounting information: 0/1 done 00:39:45.618 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:45.618 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100816 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100816 ']' 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100816 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100816 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:45.876 killing process with pid 100816 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100816' 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100816 00:39:45.876 14:08:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100816 00:39:46.443 14:08:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:39:46.443 00:39:46.443 real 0m5.145s 00:39:46.443 user 0m7.662s 00:39:46.443 sys 0m1.487s 00:39:46.443 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:46.443 14:08:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:46.443 ************************************ 00:39:46.443 END TEST bdev_nbd 00:39:46.443 ************************************ 00:39:46.443 14:08:52 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:39:46.443 14:08:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:39:46.443 14:08:52 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:39:46.443 14:08:52 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:39:46.443 14:08:52 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:46.443 14:08:52 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:46.444 14:08:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:46.444 ************************************ 00:39:46.444 START TEST bdev_fio 00:39:46.444 ************************************ 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:39:46.444 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:46.444 ************************************ 00:39:46.444 START TEST bdev_fio_rw_verify 00:39:46.444 ************************************ 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:46.444 14:08:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:39:46.703 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:39:46.703 fio-3.35 00:39:46.703 Starting 1 thread 00:39:58.920 00:39:58.920 job_raid5f: (groupid=0, jobs=1): err= 0: pid=101007: Wed Oct 9 14:09:03 2024 00:39:58.920 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(428MiB/10001msec) 00:39:58.920 slat (usec): min=17, max=341, avg=21.25, stdev= 4.26 00:39:58.920 clat (usec): min=10, max=705, avg=144.96, stdev=53.16 00:39:58.920 lat (usec): min=30, max=725, avg=166.21, stdev=54.00 00:39:58.920 clat percentiles (usec): 00:39:58.920 | 50.000th=[ 145], 99.000th=[ 273], 99.900th=[ 334], 99.990th=[ 494], 00:39:58.920 | 99.999th=[ 627] 00:39:58.920 write: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(446MiB/9880msec); 0 zone resets 00:39:58.920 slat (usec): min=8, max=339, avg=18.92, stdev= 4.98 00:39:58.920 clat (usec): min=58, max=1340, avg=331.02, stdev=58.56 00:39:58.920 lat (usec): min=74, max=1458, avg=349.94, stdev=60.47 00:39:58.920 clat percentiles (usec): 00:39:58.920 | 50.000th=[ 330], 99.000th=[ 537], 99.900th=[ 685], 99.990th=[ 1156], 00:39:58.920 | 99.999th=[ 1270] 00:39:58.920 bw ( KiB/s): min=33856, max=49416, per=98.57%, avg=45516.63, stdev=3300.60, samples=19 00:39:58.920 iops : min= 8464, max=12354, avg=11379.16, stdev=825.15, samples=19 00:39:58.920 lat (usec) : 20=0.01%, 50=0.01%, 100=11.28%, 
250=39.43%, 500=48.37% 00:39:58.920 lat (usec) : 750=0.88%, 1000=0.02% 00:39:58.920 lat (msec) : 2=0.02% 00:39:58.920 cpu : usr=98.70%, sys=0.57%, ctx=23, majf=0, minf=12252 00:39:58.920 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:58.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.920 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:58.920 issued rwts: total=109625,114053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:58.920 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:58.920 00:39:58.920 Run status group 0 (all jobs): 00:39:58.920 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=428MiB (449MB), run=10001-10001msec 00:39:58.920 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=446MiB (467MB), run=9880-9880msec 00:39:58.920 ----------------------------------------------------- 00:39:58.920 Suppressions used: 00:39:58.920 count bytes template 00:39:58.920 1 7 /usr/src/fio/parse.c 00:39:58.920 995 95520 /usr/src/fio/iolog.c 00:39:58.920 1 8 libtcmalloc_minimal.so 00:39:58.920 1 904 libcrypto.so 00:39:58.920 ----------------------------------------------------- 00:39:58.920 00:39:58.920 00:39:58.920 real 0m11.215s 00:39:58.920 user 0m11.243s 00:39:58.920 sys 0m0.880s 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:39:58.920 ************************************ 00:39:58.920 END TEST bdev_fio_rw_verify 00:39:58.920 ************************************ 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:39:58.920 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "eda54f45-be58-471b-b808-45e163ba0a8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "eda54f45-be58-471b-b808-45e163ba0a8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "eda54f45-be58-471b-b808-45e163ba0a8b",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2df0dfb8-30c3-4172-8746-a9de5b0879b5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c0c59cbe-609c-4bb6-9181-d1f30fdd969f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d1549773-236c-4e90-a85c-0fbb09f1b813",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:39:58.921 /home/vagrant/spdk_repo/spdk 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:39:58.921 00:39:58.921 real 0m11.464s 00:39:58.921 user 0m11.368s 00:39:58.921 sys 0m0.982s 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:58.921 14:09:04 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:39:58.921 ************************************ 00:39:58.921 END TEST bdev_fio 00:39:58.921 ************************************ 00:39:58.921 14:09:04 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:58.921 14:09:04 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:58.921 14:09:04 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:39:58.921 14:09:04 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:58.921 14:09:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:39:58.921 ************************************ 00:39:58.921 START TEST bdev_verify 00:39:58.921 ************************************ 00:39:58.921 14:09:04 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:58.921 [2024-10-09 14:09:04.360877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:39:58.921 [2024-10-09 14:09:04.361002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101154 ] 00:39:58.921 [2024-10-09 14:09:04.524542] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:58.921 [2024-10-09 14:09:04.571276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.921 [2024-10-09 14:09:04.571371] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:39:58.921 Running I/O for 5 seconds... 00:40:00.424 11974.00 IOPS, 46.77 MiB/s [2024-10-09T14:09:07.912Z] 12989.00 IOPS, 50.74 MiB/s [2024-10-09T14:09:08.848Z] 13779.33 IOPS, 53.83 MiB/s [2024-10-09T14:09:09.784Z] 14519.50 IOPS, 56.72 MiB/s [2024-10-09T14:09:10.043Z] 15233.00 IOPS, 59.50 MiB/s 00:40:03.492 Latency(us) 00:40:03.492 [2024-10-09T14:09:10.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.492 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:40:03.492 Verification LBA range: start 0x0 length 0x2000 00:40:03.492 raid5f : 5.01 7608.27 29.72 0.00 0.00 25194.91 202.85 24591.60 00:40:03.492 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:40:03.492 Verification LBA range: start 0x2000 length 0x2000 00:40:03.492 raid5f : 5.01 7596.83 29.68 0.00 0.00 25234.47 207.73 24466.77 00:40:03.492 [2024-10-09T14:09:10.043Z] =================================================================================================================== 00:40:03.492 [2024-10-09T14:09:10.043Z] Total : 15205.11 59.39 0.00 0.00 25214.69 202.85 24591.60 00:40:03.751 00:40:03.751 real 0m5.766s 00:40:03.751 user 0m10.689s 00:40:03.751 sys 0m0.261s 00:40:03.751 14:09:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:03.751 
************************************ 00:40:03.751 14:09:10 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:40:03.751 END TEST bdev_verify 00:40:03.751 ************************************ 00:40:03.751 14:09:10 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:03.751 14:09:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:40:03.751 14:09:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:03.751 14:09:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:03.751 ************************************ 00:40:03.751 START TEST bdev_verify_big_io 00:40:03.751 ************************************ 00:40:03.751 14:09:10 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:40:03.751 [2024-10-09 14:09:10.187409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:03.751 [2024-10-09 14:09:10.187526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101236 ] 00:40:04.010 [2024-10-09 14:09:10.344886] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:04.010 [2024-10-09 14:09:10.388674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.010 [2024-10-09 14:09:10.388746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.269 Running I/O for 5 seconds... 
00:40:06.580 756.00 IOPS, 47.25 MiB/s [2024-10-09T14:09:14.067Z] 761.00 IOPS, 47.56 MiB/s [2024-10-09T14:09:15.004Z] 846.00 IOPS, 52.88 MiB/s [2024-10-09T14:09:15.941Z] 888.50 IOPS, 55.53 MiB/s [2024-10-09T14:09:15.941Z] 914.00 IOPS, 57.12 MiB/s 00:40:09.390 Latency(us) 00:40:09.390 [2024-10-09T14:09:15.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:09.390 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:09.390 Verification LBA range: start 0x0 length 0x200 00:40:09.390 raid5f : 5.25 459.98 28.75 0.00 0.00 6787677.01 185.30 335544.32 00:40:09.390 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:09.390 Verification LBA range: start 0x200 length 0x200 00:40:09.390 raid5f : 5.26 458.05 28.63 0.00 0.00 6900430.58 149.21 335544.32 00:40:09.390 [2024-10-09T14:09:15.941Z] =================================================================================================================== 00:40:09.390 [2024-10-09T14:09:15.941Z] Total : 918.02 57.38 0.00 0.00 6844030.42 149.21 335544.32 00:40:09.648 00:40:09.648 real 0m6.004s 00:40:09.648 user 0m11.190s 00:40:09.648 sys 0m0.242s 00:40:09.648 14:09:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.648 14:09:16 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:40:09.648 ************************************ 00:40:09.648 END TEST bdev_verify_big_io 00:40:09.648 ************************************ 00:40:09.649 14:09:16 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:09.649 14:09:16 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:40:09.649 14:09:16 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.649 14:09:16 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:09.649 ************************************ 00:40:09.649 START TEST bdev_write_zeroes 00:40:09.649 ************************************ 00:40:09.649 14:09:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:09.908 [2024-10-09 14:09:16.279861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:09.908 [2024-10-09 14:09:16.280038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101323 ] 00:40:10.167 [2024-10-09 14:09:16.458203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.167 [2024-10-09 14:09:16.504636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:10.167 Running I/O for 1 seconds... 
00:40:11.543 26775.00 IOPS, 104.59 MiB/s 00:40:11.543 Latency(us) 00:40:11.543 [2024-10-09T14:09:18.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:11.543 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:11.543 raid5f : 1.01 26735.40 104.44 0.00 0.00 4772.50 1435.55 6553.60 00:40:11.543 [2024-10-09T14:09:18.094Z] =================================================================================================================== 00:40:11.543 [2024-10-09T14:09:18.094Z] Total : 26735.40 104.44 0.00 0.00 4772.50 1435.55 6553.60 00:40:11.543 00:40:11.543 real 0m1.807s 00:40:11.543 user 0m1.414s 00:40:11.543 sys 0m0.271s 00:40:11.543 14:09:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:11.543 ************************************ 00:40:11.543 END TEST bdev_write_zeroes 00:40:11.543 ************************************ 00:40:11.543 14:09:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:11.543 14:09:18 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:11.543 14:09:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:40:11.543 14:09:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:11.543 14:09:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:11.543 ************************************ 00:40:11.543 START TEST bdev_json_nonenclosed 00:40:11.543 ************************************ 00:40:11.543 14:09:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:11.801 [2024-10-09 
14:09:18.142391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:11.801 [2024-10-09 14:09:18.142586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101360 ] 00:40:11.801 [2024-10-09 14:09:18.316005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.060 [2024-10-09 14:09:18.363373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.060 [2024-10-09 14:09:18.363481] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:40:12.060 [2024-10-09 14:09:18.363510] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:12.060 [2024-10-09 14:09:18.363525] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:12.060 00:40:12.060 real 0m0.451s 00:40:12.060 user 0m0.184s 00:40:12.060 sys 0m0.163s 00:40:12.060 14:09:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:12.060 14:09:18 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:40:12.060 ************************************ 00:40:12.060 END TEST bdev_json_nonenclosed 00:40:12.060 ************************************ 00:40:12.060 14:09:18 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:12.060 14:09:18 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:40:12.060 14:09:18 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:12.060 14:09:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:12.060 
************************************ 00:40:12.060 START TEST bdev_json_nonarray 00:40:12.060 ************************************ 00:40:12.060 14:09:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:12.318 [2024-10-09 14:09:18.653022] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:40:12.318 [2024-10-09 14:09:18.653200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101385 ] 00:40:12.318 [2024-10-09 14:09:18.832993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.577 [2024-10-09 14:09:18.878736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.577 [2024-10-09 14:09:18.878855] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:40:12.577 [2024-10-09 14:09:18.878885] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:12.577 [2024-10-09 14:09:18.878901] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:12.577 00:40:12.577 real 0m0.457s 00:40:12.577 user 0m0.192s 00:40:12.577 sys 0m0.160s 00:40:12.577 ************************************ 00:40:12.577 END TEST bdev_json_nonarray 00:40:12.577 ************************************ 00:40:12.577 14:09:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:12.577 14:09:18 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:40:12.577 14:09:19 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:40:12.577 00:40:12.577 real 0m36.126s 00:40:12.577 user 0m49.606s 00:40:12.577 sys 0m5.214s 00:40:12.577 14:09:19 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:12.577 14:09:19 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:40:12.577 
************************************ 00:40:12.577 END TEST blockdev_raid5f 00:40:12.577 ************************************ 00:40:12.577 14:09:19 -- spdk/autotest.sh@194 -- # uname -s 00:40:12.577 14:09:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:40:12.577 14:09:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:40:12.577 14:09:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:40:12.577 14:09:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:40:12.577 14:09:19 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:40:12.577 14:09:19 -- spdk/autotest.sh@256 -- # timing_exit lib 00:40:12.577 14:09:19 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:12.577 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:40:12.836 14:09:19 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:40:12.836 14:09:19 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:40:12.836 14:09:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:40:12.836 14:09:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:40:12.836 14:09:19 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:40:12.836 14:09:19 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:40:12.836 14:09:19 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:40:12.836 14:09:19 -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:12.836 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:40:12.836 14:09:19 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:40:12.836 14:09:19 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:12.836 14:09:19 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:12.836 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:40:14.734 INFO: APP EXITING 00:40:14.734 INFO: killing all VMs 00:40:14.734 INFO: killing vhost app 00:40:14.734 INFO: EXIT DONE 00:40:14.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:14.993 Waiting for block devices as requested 00:40:14.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:40:15.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:16.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:16.189 Cleaning 00:40:16.189 Removing: /var/run/dpdk/spdk0/config 00:40:16.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:16.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:16.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:16.189 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:16.189 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:16.189 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:16.189 Removing: /dev/shm/spdk_tgt_trace.pid69311 00:40:16.189 Removing: /var/run/dpdk/spdk0 00:40:16.189 Removing: /var/run/dpdk/spdk_pid100437 00:40:16.189 Removing: /var/run/dpdk/spdk_pid100695 00:40:16.189 Removing: /var/run/dpdk/spdk_pid100733 00:40:16.189 Removing: /var/run/dpdk/spdk_pid100762 00:40:16.189 Removing: /var/run/dpdk/spdk_pid100993 00:40:16.189 Removing: /var/run/dpdk/spdk_pid101154 00:40:16.189 Removing: 
/var/run/dpdk/spdk_pid101236
00:40:16.189 Removing: /var/run/dpdk/spdk_pid101323
00:40:16.189 Removing: /var/run/dpdk/spdk_pid101360
00:40:16.189 Removing: /var/run/dpdk/spdk_pid101385
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69141
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69311
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69522
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69604
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69638
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69750
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69768
00:40:16.189 Removing: /var/run/dpdk/spdk_pid69956
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70029
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70114
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70214
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70300
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70340
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70371
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70447
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70555
00:40:16.189 Removing: /var/run/dpdk/spdk_pid70996
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71055
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71112
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71128
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71197
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71213
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71282
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71298
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71351
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71369
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71416
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71435
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71567
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71604
00:40:16.189 Removing: /var/run/dpdk/spdk_pid71687
00:40:16.189 Removing: /var/run/dpdk/spdk_pid72885
00:40:16.189 Removing: /var/run/dpdk/spdk_pid73080
00:40:16.189 Removing: /var/run/dpdk/spdk_pid73215
00:40:16.189 Removing: /var/run/dpdk/spdk_pid73825
00:40:16.189 Removing: /var/run/dpdk/spdk_pid74025
00:40:16.189 Removing: /var/run/dpdk/spdk_pid74160
00:40:16.189 Removing: /var/run/dpdk/spdk_pid74770
00:40:16.189 Removing: /var/run/dpdk/spdk_pid75086
00:40:16.189 Removing: /var/run/dpdk/spdk_pid75216
00:40:16.189 Removing: /var/run/dpdk/spdk_pid76568
00:40:16.189 Removing: /var/run/dpdk/spdk_pid76797
00:40:16.189 Removing: /var/run/dpdk/spdk_pid76932
00:40:16.189 Removing: /var/run/dpdk/spdk_pid78290
00:40:16.189 Removing: /var/run/dpdk/spdk_pid78532
00:40:16.448 Removing: /var/run/dpdk/spdk_pid78661
00:40:16.448 Removing: /var/run/dpdk/spdk_pid80008
00:40:16.448 Removing: /var/run/dpdk/spdk_pid80442
00:40:16.448 Removing: /var/run/dpdk/spdk_pid80576
00:40:16.448 Removing: /var/run/dpdk/spdk_pid82007
00:40:16.448 Removing: /var/run/dpdk/spdk_pid82255
00:40:16.448 Removing: /var/run/dpdk/spdk_pid82384
00:40:16.448 Removing: /var/run/dpdk/spdk_pid83827
00:40:16.448 Removing: /var/run/dpdk/spdk_pid84081
00:40:16.448 Removing: /var/run/dpdk/spdk_pid84210
00:40:16.448 Removing: /var/run/dpdk/spdk_pid85653
00:40:16.448 Removing: /var/run/dpdk/spdk_pid86131
00:40:16.448 Removing: /var/run/dpdk/spdk_pid86260
00:40:16.448 Removing: /var/run/dpdk/spdk_pid86393
00:40:16.448 Removing: /var/run/dpdk/spdk_pid86809
00:40:16.448 Removing: /var/run/dpdk/spdk_pid87541
00:40:16.448 Removing: /var/run/dpdk/spdk_pid87906
00:40:16.448 Removing: /var/run/dpdk/spdk_pid88578
00:40:16.448 Removing: /var/run/dpdk/spdk_pid89012
00:40:16.448 Removing: /var/run/dpdk/spdk_pid89759
00:40:16.448 Removing: /var/run/dpdk/spdk_pid90157
00:40:16.448 Removing: /var/run/dpdk/spdk_pid92082
00:40:16.448 Removing: /var/run/dpdk/spdk_pid92509
00:40:16.448 Removing: /var/run/dpdk/spdk_pid92938
00:40:16.448 Removing: /var/run/dpdk/spdk_pid94977
00:40:16.448 Removing: /var/run/dpdk/spdk_pid95446
00:40:16.448 Removing: /var/run/dpdk/spdk_pid95951
00:40:16.448 Removing: /var/run/dpdk/spdk_pid96985
00:40:16.448 Removing: /var/run/dpdk/spdk_pid97301
00:40:16.448 Removing: /var/run/dpdk/spdk_pid98221
00:40:16.448 Removing: /var/run/dpdk/spdk_pid98537
00:40:16.448 Removing: /var/run/dpdk/spdk_pid99459
00:40:16.448 Removing: /var/run/dpdk/spdk_pid99775
00:40:16.448 Clean
00:40:16.448 14:09:22 -- common/autotest_common.sh@1451 -- # return 0
00:40:16.449 14:09:22 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:40:16.449 14:09:22 -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:16.449 14:09:22 -- common/autotest_common.sh@10 -- # set +x
00:40:16.449 14:09:22 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:40:16.449 14:09:22 -- common/autotest_common.sh@730 -- # xtrace_disable
00:40:16.449 14:09:22 -- common/autotest_common.sh@10 -- # set +x
00:40:16.708 14:09:22 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:40:16.708 14:09:23 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:40:16.708 14:09:23 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:40:16.708 14:09:23 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:40:16.708 14:09:23 -- spdk/autotest.sh@394 -- # hostname
00:40:16.708 14:09:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:40:16.708 geninfo: WARNING: invalid characters removed from testname!
00:40:38.631 14:09:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:41.162 14:09:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:43.062 14:09:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:45.627 14:09:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:47.531 14:09:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:50.066 14:09:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:40:52.601 14:09:58 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:40:52.601 14:09:58 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:40:52.601 14:09:58 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:40:52.601 14:09:58 -- common/autotest_common.sh@1681 -- $ lcov --version
00:40:52.601 14:09:59 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:40:52.601 14:09:59 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:40:52.601 14:09:59 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:40:52.601 14:09:59 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:40:52.601 14:09:59 -- scripts/common.sh@336 -- $ IFS=.-:
00:40:52.601 14:09:59 -- scripts/common.sh@336 -- $ read -ra ver1
00:40:52.601 14:09:59 -- scripts/common.sh@337 -- $ IFS=.-:
00:40:52.601 14:09:59 -- scripts/common.sh@337 -- $ read -ra ver2
00:40:52.601 14:09:59 -- scripts/common.sh@338 -- $ local 'op=<'
00:40:52.601 14:09:59 -- scripts/common.sh@340 -- $ ver1_l=2
00:40:52.601 14:09:59 -- scripts/common.sh@341 -- $ ver2_l=1
00:40:52.601 14:09:59 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:40:52.601 14:09:59 -- scripts/common.sh@344 -- $ case "$op" in
00:40:52.601 14:09:59 -- scripts/common.sh@345 -- $ : 1
00:40:52.601 14:09:59 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:40:52.601 14:09:59 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:40:52.601 14:09:59 -- scripts/common.sh@365 -- $ decimal 1
00:40:52.601 14:09:59 -- scripts/common.sh@353 -- $ local d=1
00:40:52.601 14:09:59 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:40:52.601 14:09:59 -- scripts/common.sh@355 -- $ echo 1
00:40:52.601 14:09:59 -- scripts/common.sh@365 -- $ ver1[v]=1
00:40:52.601 14:09:59 -- scripts/common.sh@366 -- $ decimal 2
00:40:52.601 14:09:59 -- scripts/common.sh@353 -- $ local d=2
00:40:52.601 14:09:59 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:40:52.601 14:09:59 -- scripts/common.sh@355 -- $ echo 2
00:40:52.601 14:09:59 -- scripts/common.sh@366 -- $ ver2[v]=2
00:40:52.601 14:09:59 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:40:52.601 14:09:59 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:40:52.601 14:09:59 -- scripts/common.sh@368 -- $ return 0
00:40:52.601 14:09:59 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:40:52.601 14:09:59 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:40:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:52.601 --rc genhtml_branch_coverage=1
00:40:52.601 --rc genhtml_function_coverage=1
00:40:52.601 --rc genhtml_legend=1
00:40:52.601 --rc geninfo_all_blocks=1
00:40:52.601 --rc geninfo_unexecuted_blocks=1
00:40:52.601 
00:40:52.601 '
00:40:52.601 14:09:59 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:40:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:52.601 --rc genhtml_branch_coverage=1
00:40:52.601 --rc genhtml_function_coverage=1
00:40:52.601 --rc genhtml_legend=1
00:40:52.601 --rc geninfo_all_blocks=1
00:40:52.601 --rc geninfo_unexecuted_blocks=1
00:40:52.601 
00:40:52.601 '
00:40:52.601 14:09:59 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:40:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:52.601 --rc genhtml_branch_coverage=1
00:40:52.601 --rc genhtml_function_coverage=1
00:40:52.601 --rc genhtml_legend=1
00:40:52.601 --rc geninfo_all_blocks=1
00:40:52.601 --rc geninfo_unexecuted_blocks=1
00:40:52.601 
00:40:52.601 '
00:40:52.601 14:09:59 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:40:52.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:52.601 --rc genhtml_branch_coverage=1
00:40:52.601 --rc genhtml_function_coverage=1
00:40:52.601 --rc genhtml_legend=1
00:40:52.601 --rc geninfo_all_blocks=1
00:40:52.601 --rc geninfo_unexecuted_blocks=1
00:40:52.601 
00:40:52.601 '
00:40:52.601 14:09:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:40:52.601 14:09:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:40:52.601 14:09:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:40:52.601 14:09:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:52.601 14:09:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:52.601 14:09:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:52.601 14:09:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:52.601 14:09:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:52.601 14:09:59 -- paths/export.sh@5 -- $ export PATH
00:40:52.601 14:09:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:52.601 14:09:59 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:40:52.601 14:09:59 -- common/autobuild_common.sh@479 -- $ date +%s
00:40:52.601 14:09:59 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1728482999.XXXXXX
00:40:52.601 14:09:59 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1728482999.q84z2X
00:40:52.601 14:09:59 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:40:52.601 14:09:59 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:40:52.601 14:09:59 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:40:52.601 14:09:59 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:40:52.601 14:09:59 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:40:52.601 14:09:59 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:40:52.601 14:09:59 -- common/autobuild_common.sh@495 -- $ get_config_params
00:40:52.601 14:09:59 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:40:52.601 14:09:59 -- common/autotest_common.sh@10 -- $ set +x
00:40:52.601 14:09:59 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:40:52.601 14:09:59 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:40:52.601 14:09:59 -- pm/common@17 -- $ local monitor
00:40:52.601 14:09:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:52.601 14:09:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:52.601 14:09:59 -- pm/common@25 -- $ sleep 1
00:40:52.601 14:09:59 -- pm/common@21 -- $ date +%s
00:40:52.601 14:09:59 -- pm/common@21 -- $ date +%s
00:40:52.601 14:09:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728482999
00:40:52.601 14:09:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728482999
00:40:52.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728482999_collect-cpu-load.pm.log
00:40:52.860 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728482999_collect-vmstat.pm.log
00:40:53.796 14:10:00 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:40:53.796 14:10:00 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:40:53.796 14:10:00 -- spdk/autopackage.sh@14 -- $ timing_finish
00:40:53.796 14:10:00 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:40:53.796 14:10:00 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:40:53.796 14:10:00 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:40:53.796 14:10:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:40:53.796 14:10:00 -- pm/common@29 -- $ signal_monitor_resources TERM
00:40:53.796 14:10:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:40:53.796 14:10:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:53.796 14:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:40:53.796 14:10:00 -- pm/common@44 -- $ pid=102855
00:40:53.796 14:10:00 -- pm/common@50 -- $ kill -TERM 102855
00:40:53.796 14:10:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:40:53.796 14:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:40:53.796 14:10:00 -- pm/common@44 -- $ pid=102856
00:40:53.796 14:10:00 -- pm/common@50 -- $ kill -TERM 102856
00:40:53.796 + [[ -n 6008 ]]
00:40:53.796 + sudo kill 6008
00:40:53.806 [Pipeline] }
00:40:53.822 [Pipeline] // timeout
00:40:53.828 [Pipeline] }
00:40:53.842 [Pipeline] // stage
00:40:53.847 [Pipeline] }
00:40:53.862 [Pipeline] // catchError
00:40:53.871 [Pipeline] stage
00:40:53.873 [Pipeline] { (Stop VM)
00:40:53.886 [Pipeline] sh
00:40:54.167 + vagrant halt
00:40:57.455 ==> default: Halting domain...
00:41:04.028 [Pipeline] sh
00:41:04.309 + vagrant destroy -f
00:41:07.594 ==> default: Removing domain...
00:41:07.605 [Pipeline] sh
00:41:07.949 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:41:07.985 [Pipeline] }
00:41:08.000 [Pipeline] // stage
00:41:08.005 [Pipeline] }
00:41:08.019 [Pipeline] // dir
00:41:08.024 [Pipeline] }
00:41:08.036 [Pipeline] // wrap
00:41:08.041 [Pipeline] }
00:41:08.052 [Pipeline] // catchError
00:41:08.059 [Pipeline] stage
00:41:08.060 [Pipeline] { (Epilogue)
00:41:08.070 [Pipeline] sh
00:41:08.346 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:13.623 [Pipeline] catchError
00:41:13.625 [Pipeline] {
00:41:13.637 [Pipeline] sh
00:41:13.921 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:17.207 Artifacts sizes are good
00:41:17.216 [Pipeline] }
00:41:17.230 [Pipeline] // catchError
00:41:17.242 [Pipeline] archiveArtifacts
00:41:17.249 Archiving artifacts
00:41:17.385 [Pipeline] cleanWs
00:41:17.419 [WS-CLEANUP] Deleting project workspace...
00:41:17.419 [WS-CLEANUP] Deferred wipeout is used...
00:41:17.425 [WS-CLEANUP] done
00:41:17.427 [Pipeline] }
00:41:17.444 [Pipeline] // stage
00:41:17.463 [Pipeline] }
00:41:17.468 [Pipeline] // node
00:41:17.468 [Pipeline] End of Pipeline
00:41:17.500 Finished: SUCCESS